Test Report: QEMU_macOS 19355

6d23947514fd7a389789fed180382829b6444229:2024-08-02:35618

Failed tests (97/282)

Order   Failed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.29
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.98
55 TestCertOptions 10.23
56 TestCertExpiration 195.38
57 TestDockerFlags 10.33
58 TestForceSystemdFlag 10.19
59 TestForceSystemdEnv 12
104 TestFunctional/parallel/ServiceCmdConnect 35.92
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104
178 TestMultiControlPlane/serial/RestartSecondaryNode 183.75
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.37
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.05
183 TestMultiControlPlane/serial/StopCluster 202.08
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.92
193 TestJSONOutput/start/Command 9.89
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.17
225 TestMountStart/serial/StartWithMountFirst 10.06
228 TestMultiNode/serial/FreshStart2Nodes 10.13
229 TestMultiNode/serial/DeployApp2Nodes 85.33
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.08
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.14
236 TestMultiNode/serial/StartAfterStop 43.83
237 TestMultiNode/serial/RestartKeepsNodes 8.44
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.35
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.17
245 TestPreload 10.01
247 TestScheduledStopUnix 10.1
248 TestSkaffold 12.25
251 TestRunningBinaryUpgrade 610.45
253 TestKubernetesUpgrade 18.7
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.09
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.4
269 TestStoppedBinaryUpgrade/Upgrade 572.96
271 TestPause/serial/Start 9.94
281 TestNoKubernetes/serial/StartWithK8s 9.86
282 TestNoKubernetes/serial/StartWithStopK8s 5.29
283 TestNoKubernetes/serial/Start 5.27
287 TestNoKubernetes/serial/StartNoArgs 5.33
289 TestNetworkPlugins/group/auto/Start 10.03
290 TestNetworkPlugins/group/kindnet/Start 9.71
291 TestNetworkPlugins/group/calico/Start 9.89
292 TestNetworkPlugins/group/custom-flannel/Start 9.75
293 TestNetworkPlugins/group/false/Start 9.76
294 TestNetworkPlugins/group/enable-default-cni/Start 9.78
295 TestNetworkPlugins/group/flannel/Start 9.91
296 TestNetworkPlugins/group/bridge/Start 9.96
298 TestNetworkPlugins/group/kubenet/Start 10.07
300 TestStartStop/group/old-k8s-version/serial/FirstStart 10.07
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
307 TestStartStop/group/no-preload/serial/FirstStart 10.03
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/old-k8s-version/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 10.08
314 TestStartStop/group/no-preload/serial/DeployApp 0.09
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/embed-certs/serial/DeployApp 0.1
319 TestStartStop/group/no-preload/serial/SecondStart 5.28
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.18
323 TestStartStop/group/embed-certs/serial/SecondStart 6.12
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
327 TestStartStop/group/no-preload/serial/Pause 0.1
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.94
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
333 TestStartStop/group/embed-certs/serial/Pause 0.1
335 TestStartStop/group/newest-cni/serial/FirstStart 9.9
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.08
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.74
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (15.29s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-200000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-200000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.28710625s)

-- stdout --
	{"specversion":"1.0","id":"6367ee9b-8352-4963-bb6b-a3c7efecc5ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-200000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ba6b06d-bc4f-4e40-ba4b-4841d6e3fe0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"da2ddbc6-b87f-4518-a68e-8c27bb7487f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig"}}
	{"specversion":"1.0","id":"e024a779-a650-4245-b70f-339bb1d6002f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b15cc5d0-c00b-4408-8b26-7c82b88e6702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"98413eba-ec10-46d0-a7f8-4ab68bcfb661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube"}}
	{"specversion":"1.0","id":"451024e3-b2aa-4d1f-a5ed-d9f1a48afab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"399dcbca-ade4-4b56-9528-ffa55d3da5ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"328ba1f6-d2cc-4e1f-b875-3d197b4b6b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b079ec82-9ece-4aa4-a6ac-f2f6ea0d3574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"294da675-85c6-471c-84ee-e2562a5802a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-200000\" primary control-plane node in \"download-only-200000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f4f096b-59d8-4a93-9eea-e0c03156a92d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"146cc31a-e7e5-47af-b2af-4c4f52e21f8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80] Decompressors:map[bz2:0x14000511c90 gz:0x14000511c98 tar:0x14000511c00 tar.bz2:0x14000511c20 tar.gz:0x14000511c50 tar.xz:0x14000511c60 tar.zst:0x14000511c80 tbz2:0x14000511c20 tgz:0x14
000511c50 txz:0x14000511c60 tzst:0x14000511c80 xz:0x14000511ca0 zip:0x14000511cb0 zst:0x14000511ca8] Getters:map[file:0x140002aea80 http:0x140004e6230 https:0x140004e63c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a84ff337-7fc0-4087-aa6a-20bfee56c518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0802 10:25:28.708663    1749 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:25:28.708818    1749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:28.708821    1749 out.go:304] Setting ErrFile to fd 2...
	I0802 10:25:28.708824    1749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:28.708980    1749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	W0802 10:25:28.709076    1749 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19355-1243/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19355-1243/.minikube/config/config.json: no such file or directory
	I0802 10:25:28.710348    1749 out.go:298] Setting JSON to true
	I0802 10:25:28.727547    1749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1491,"bootTime":1722618037,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:25:28.727662    1749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:25:28.733227    1749 out.go:97] [download-only-200000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:25:28.733391    1749 notify.go:220] Checking for updates...
	W0802 10:25:28.733409    1749 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball: no such file or directory
	I0802 10:25:28.737262    1749 out.go:169] MINIKUBE_LOCATION=19355
	I0802 10:25:28.740292    1749 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:25:28.749227    1749 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:25:28.757293    1749 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:25:28.761254    1749 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	W0802 10:25:28.767301    1749 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 10:25:28.767623    1749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:25:28.772320    1749 out.go:97] Using the qemu2 driver based on user configuration
	I0802 10:25:28.772340    1749 start.go:297] selected driver: qemu2
	I0802 10:25:28.772344    1749 start.go:901] validating driver "qemu2" against <nil>
	I0802 10:25:28.772427    1749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 10:25:28.775218    1749 out.go:169] Automatically selected the socket_vmnet network
	I0802 10:25:28.781111    1749 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0802 10:25:28.781241    1749 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 10:25:28.781308    1749 cni.go:84] Creating CNI manager for ""
	I0802 10:25:28.781325    1749 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 10:25:28.781384    1749 start.go:340] cluster config:
	{Name:download-only-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:25:28.786889    1749 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:25:28.790359    1749 out.go:97] Downloading VM boot image ...
	I0802 10:25:28.790378    1749 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0802 10:25:36.013335    1749 out.go:97] Starting "download-only-200000" primary control-plane node in "download-only-200000" cluster
	I0802 10:25:36.013353    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:36.068787    1749 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 10:25:36.068794    1749 cache.go:56] Caching tarball of preloaded images
	I0802 10:25:36.068926    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:36.073024    1749 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0802 10:25:36.073031    1749 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:36.155818    1749 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 10:25:42.809293    1749 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:42.809446    1749 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:43.520946    1749 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0802 10:25:43.521153    1749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-200000/config.json ...
	I0802 10:25:43.521172    1749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-200000/config.json: {Name:mk700a421512df1c0b5a01439a4728ae848a7259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 10:25:43.521424    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:43.521626    1749 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0802 10:25:43.919608    1749 out.go:169] 
	W0802 10:25:43.923637    1749 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80] Decompressors:map[bz2:0x14000511c90 gz:0x14000511c98 tar:0x14000511c00 tar.bz2:0x14000511c20 tar.gz:0x14000511c50 tar.xz:0x14000511c60 tar.zst:0x14000511c80 tbz2:0x14000511c20 tgz:0x14000511c50 txz:0x14000511c60 tzst:0x14000511c80 xz:0x14000511ca0 zip:0x14000511cb0 zst:0x14000511ca8] Getters:map[file:0x140002aea80 http:0x140004e6230 https:0x140004e63c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0802 10:25:43.923663    1749 out_reason.go:110] 
	W0802 10:25:43.932743    1749 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 10:25:43.936601    1749 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-200000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.29s)
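
The root cause is the 404 on the kubectl checksum fetch above. A quick way to confirm it outside the test harness (a diagnostic sketch; assumes curl is on the agent's PATH, and that darwin/arm64 kubectl binaries were presumably never published for v1.20.0):

	# Print the final HTTP status after redirects; both URLs from the log should report 404
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl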

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
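
This failure is a direct consequence of the 404 above: the binary never landed in the cache. A one-line check on the agent (a sketch, using the same path as the error message):

	# Confirms the cached kubectl binary is absent
	stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl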

TestOffline (9.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.824606666s)

-- stdout --
	* [offline-docker-953000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-953000" primary control-plane node in "offline-docker-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:03:45.888886    4270 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:03:45.889042    4270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:45.889045    4270 out.go:304] Setting ErrFile to fd 2...
	I0802 11:03:45.889047    4270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:45.889181    4270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:03:45.890640    4270 out.go:298] Setting JSON to false
	I0802 11:03:45.908599    4270 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3789,"bootTime":1722618036,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:03:45.908677    4270 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:03:45.913828    4270 out.go:177] * [offline-docker-953000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:03:45.920822    4270 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:03:45.920839    4270 notify.go:220] Checking for updates...
	I0802 11:03:45.927846    4270 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:03:45.930883    4270 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:03:45.933798    4270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:03:45.936820    4270 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:03:45.939722    4270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:03:45.943167    4270 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:03:45.943236    4270 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:03:45.946811    4270 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:03:45.953867    4270 start.go:297] selected driver: qemu2
	I0802 11:03:45.953878    4270 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:03:45.953885    4270 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:03:45.955810    4270 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:03:45.958811    4270 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:03:45.960024    4270 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:03:45.960060    4270 cni.go:84] Creating CNI manager for ""
	I0802 11:03:45.960068    4270 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:03:45.960072    4270 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:03:45.960105    4270 start.go:340] cluster config:
	{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:03:45.963828    4270 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:45.970837    4270 out.go:177] * Starting "offline-docker-953000" primary control-plane node in "offline-docker-953000" cluster
	I0802 11:03:45.974732    4270 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:03:45.974755    4270 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:03:45.974765    4270 cache.go:56] Caching tarball of preloaded images
	I0802 11:03:45.974822    4270 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:03:45.974828    4270 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:03:45.974889    4270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/offline-docker-953000/config.json ...
	I0802 11:03:45.974903    4270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/offline-docker-953000/config.json: {Name:mk82142facbaa72164217ceff13db431d59dc09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:03:45.975184    4270 start.go:360] acquireMachinesLock for offline-docker-953000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:03:45.975236    4270 start.go:364] duration metric: took 44.5µs to acquireMachinesLock for "offline-docker-953000"
	I0802 11:03:45.975248    4270 start.go:93] Provisioning new machine with config: &{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:03:45.975281    4270 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:03:45.979855    4270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:03:45.995600    4270 start.go:159] libmachine.API.Create for "offline-docker-953000" (driver="qemu2")
	I0802 11:03:45.995641    4270 client.go:168] LocalClient.Create starting
	I0802 11:03:45.995717    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:03:45.995746    4270 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:45.995756    4270 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:45.995800    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:03:45.995823    4270 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:45.995832    4270 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:45.996218    4270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:03:46.148216    4270 main.go:141] libmachine: Creating SSH key...
	I0802 11:03:46.198243    4270 main.go:141] libmachine: Creating Disk image...
	I0802 11:03:46.198258    4270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:03:46.198487    4270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:46.208997    4270 main.go:141] libmachine: STDOUT: 
	I0802 11:03:46.209021    4270 main.go:141] libmachine: STDERR: 
	I0802 11:03:46.209077    4270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2 +20000M
	I0802 11:03:46.323901    4270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:03:46.323934    4270 main.go:141] libmachine: STDERR: 
	I0802 11:03:46.323975    4270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:46.323985    4270 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:03:46.324004    4270 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:03:46.324058    4270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:88:f2:2a:15:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:46.326909    4270 main.go:141] libmachine: STDOUT: 
	I0802 11:03:46.326939    4270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:03:46.326971    4270 client.go:171] duration metric: took 331.332208ms to LocalClient.Create
	I0802 11:03:48.329043    4270 start.go:128] duration metric: took 2.353802125s to createHost
	I0802 11:03:48.329099    4270 start.go:83] releasing machines lock for "offline-docker-953000", held for 2.353940667s
	W0802 11:03:48.329116    4270 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:48.347732    4270 out.go:177] * Deleting "offline-docker-953000" in qemu2 ...
	W0802 11:03:48.367250    4270 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:48.367260    4270 start.go:729] Will try again in 5 seconds ...
	I0802 11:03:53.369217    4270 start.go:360] acquireMachinesLock for offline-docker-953000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:03:53.369391    4270 start.go:364] duration metric: took 110.667µs to acquireMachinesLock for "offline-docker-953000"
	I0802 11:03:53.369444    4270 start.go:93] Provisioning new machine with config: &{Name:offline-docker-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:03:53.369548    4270 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:03:53.386803    4270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:03:53.403583    4270 start.go:159] libmachine.API.Create for "offline-docker-953000" (driver="qemu2")
	I0802 11:03:53.403610    4270 client.go:168] LocalClient.Create starting
	I0802 11:03:53.403685    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:03:53.403723    4270 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:53.403734    4270 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:53.403768    4270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:03:53.403794    4270 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:53.403802    4270 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:53.404144    4270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:03:53.560542    4270 main.go:141] libmachine: Creating SSH key...
	I0802 11:03:53.618018    4270 main.go:141] libmachine: Creating Disk image...
	I0802 11:03:53.618030    4270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:03:53.618262    4270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:53.627820    4270 main.go:141] libmachine: STDOUT: 
	I0802 11:03:53.627846    4270 main.go:141] libmachine: STDERR: 
	I0802 11:03:53.627912    4270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2 +20000M
	I0802 11:03:53.638164    4270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:03:53.638191    4270 main.go:141] libmachine: STDERR: 
	I0802 11:03:53.638206    4270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:53.638211    4270 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:03:53.638217    4270 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:03:53.638253    4270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:55:cd:84:bf:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/offline-docker-953000/disk.qcow2
	I0802 11:03:53.640048    4270 main.go:141] libmachine: STDOUT: 
	I0802 11:03:53.640066    4270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:03:53.640078    4270 client.go:171] duration metric: took 236.473541ms to LocalClient.Create
	I0802 11:03:55.642216    4270 start.go:128] duration metric: took 2.27271925s to createHost
	I0802 11:03:55.642291    4270 start.go:83] releasing machines lock for "offline-docker-953000", held for 2.272955791s
	W0802 11:03:55.642659    4270 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:55.652316    4270 out.go:177] 
	W0802 11:03:55.656396    4270 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:03:55.656423    4270 out.go:239] * 
	* 
	W0802 11:03:55.658929    4270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:03:55.669178    4270 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-953000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-02 11:03:55.685044 -0700 PDT m=+2307.194858667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-953000 -n offline-docker-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-953000 -n offline-docker-953000: exit status 7 (71.458334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-953000
--- FAIL: TestOffline (9.98s)
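
Every qemu2 VM creation in this run dies the same way: socket_vmnet_client cannot reach the daemon at /var/run/socket_vmnet. A minimal triage sketch for the agent (paths are the defaults shown in the log above; the brew services line follows the socket_vmnet/minikube setup docs and is an assumption about how the daemon is managed on this host):

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -l socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, restart it (requires root per the socket_vmnet docs)
	sudo brew services start socket_vmnet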

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-479000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-479000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.965686417s)

-- stdout --
	* [cert-options-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-479000" primary control-plane node in "cert-options-479000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-479000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-479000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-479000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-479000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.754625ms)

-- stdout --
	* The control-plane node cert-options-479000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-479000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-479000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-479000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-479000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-479000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.646667ms)

-- stdout --
	* The control-plane node cert-options-479000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-479000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-479000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-479000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-479000"

-- /stdout --
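
The assertion at cert_options_test.go:106 reads the in-VM kubeconfig. With a running node, the relevant line would be the server URL in admin.conf, roughly:

	$ out/minikube-darwin-arm64 ssh -p cert-options-479000 -- \
	    "sudo grep 'server:' /etc/kubernetes/admin.conf"
	#   server: https://<node-ip>:8555    (port from --apiserver-port)
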
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-02 11:04:28.288121 -0700 PDT m=+2339.799088834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-479000 -n cert-options-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-479000 -n cert-options-479000: exit status 7 (30.35725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-479000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-479000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-479000
--- FAIL: TestCertOptions (10.23s)

TestCertExpiration (195.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.009815792s)

-- stdout --
	* [cert-expiration-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-630000" primary control-plane node in "cert-expiration-630000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-630000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-630000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
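
--cert-expiration=3m asks minikube to mint cluster certificates that expire three minutes after creation. Had the VM booted, the short lifetime could be confirmed from inside the node:

	$ out/minikube-darwin-arm64 -p cert-expiration-630000 ssh \
	    "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# notAfter= should fall ~3 minutes after creation; once that moment passes,
	# `openssl x509 -checkend 0` on the same file exits non-zero
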
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222035209s)

-- stdout --
	* [cert-expiration-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-630000" primary control-plane node in "cert-expiration-630000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-630000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-630000" primary control-plane node in "cert-expiration-630000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-630000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
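
The assertion at cert_options_test.go:136 scans this second start's output for an expired-certificate warning: the 3m certificates should have lapsed during the wait, and the 8760h (one-year) restart is expected to regenerate them and say so. A rough manual reproduction of the check:

	$ out/minikube-darwin-arm64 start -p cert-expiration-630000 --memory=2048 \
	    --cert-expiration=8760h --driver=qemu2 2>&1 | grep -i expire
	# no match here: the VM never boots, so certificate handling is never reached
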
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-02 11:07:28.238382 -0700 PDT m=+2519.755724542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-630000 -n cert-expiration-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-630000 -n cert-expiration-630000: exit status 7 (63.837875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-630000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-630000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-630000
--- FAIL: TestCertExpiration (195.38s)

TestDockerFlags (10.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.104565625s)

-- stdout --
	* [docker-flags-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-256000" primary control-plane node in "docker-flags-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-256000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:04:07.864514    4460 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:04:07.864688    4460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:04:07.864691    4460 out.go:304] Setting ErrFile to fd 2...
	I0802 11:04:07.864694    4460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:04:07.864861    4460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:04:07.865974    4460 out.go:298] Setting JSON to false
	I0802 11:04:07.882113    4460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3811,"bootTime":1722618036,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:04:07.882176    4460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:04:07.886409    4460 out.go:177] * [docker-flags-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:04:07.894324    4460 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:04:07.894379    4460 notify.go:220] Checking for updates...
	I0802 11:04:07.901299    4460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:04:07.904403    4460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:04:07.907317    4460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:04:07.910339    4460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:04:07.913270    4460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:04:07.916590    4460 config.go:182] Loaded profile config "force-systemd-flag-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:04:07.916657    4460 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:04:07.916703    4460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:04:07.921276    4460 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:04:07.928317    4460 start.go:297] selected driver: qemu2
	I0802 11:04:07.928322    4460 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:04:07.928328    4460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:04:07.930663    4460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:04:07.934410    4460 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:04:07.938385    4460 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0802 11:04:07.938434    4460 cni.go:84] Creating CNI manager for ""
	I0802 11:04:07.938443    4460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:04:07.938452    4460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:04:07.938481    4460 start.go:340] cluster config:
	{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:04:07.942159    4460 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:04:07.949185    4460 out.go:177] * Starting "docker-flags-256000" primary control-plane node in "docker-flags-256000" cluster
	I0802 11:04:07.953307    4460 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:04:07.953319    4460 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:04:07.953330    4460 cache.go:56] Caching tarball of preloaded images
	I0802 11:04:07.953380    4460 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:04:07.953386    4460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:04:07.953441    4460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/docker-flags-256000/config.json ...
	I0802 11:04:07.953456    4460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/docker-flags-256000/config.json: {Name:mkc141073d80b63d85daec7149d7956a4edd872b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:04:07.953670    4460 start.go:360] acquireMachinesLock for docker-flags-256000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:04:07.953712    4460 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "docker-flags-256000"
	I0802 11:04:07.953723    4460 start.go:93] Provisioning new machine with config: &{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:04:07.953759    4460 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:04:07.961270    4460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:04:07.979180    4460 start.go:159] libmachine.API.Create for "docker-flags-256000" (driver="qemu2")
	I0802 11:04:07.979206    4460 client.go:168] LocalClient.Create starting
	I0802 11:04:07.979278    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:04:07.979311    4460 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:07.979320    4460 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:07.979361    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:04:07.979385    4460 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:07.979393    4460 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:07.979751    4460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:04:08.132147    4460 main.go:141] libmachine: Creating SSH key...
	I0802 11:04:08.231240    4460 main.go:141] libmachine: Creating Disk image...
	I0802 11:04:08.231245    4460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:04:08.231451    4460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:08.240787    4460 main.go:141] libmachine: STDOUT: 
	I0802 11:04:08.240803    4460 main.go:141] libmachine: STDERR: 
	I0802 11:04:08.240845    4460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2 +20000M
	I0802 11:04:08.248677    4460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:04:08.248701    4460 main.go:141] libmachine: STDERR: 
	I0802 11:04:08.248715    4460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:08.248721    4460 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:04:08.248731    4460 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:04:08.248762    4460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:52:67:42:21:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:08.250385    4460 main.go:141] libmachine: STDOUT: 
	I0802 11:04:08.250401    4460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:04:08.250421    4460 client.go:171] duration metric: took 271.218584ms to LocalClient.Create
	I0802 11:04:10.252521    4460 start.go:128] duration metric: took 2.298820458s to createHost
	I0802 11:04:10.252587    4460 start.go:83] releasing machines lock for "docker-flags-256000", held for 2.29894725s
	W0802 11:04:10.252648    4460 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:10.265292    4460 out.go:177] * Deleting "docker-flags-256000" in qemu2 ...
	W0802 11:04:10.295287    4460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:10.295316    4460 start.go:729] Will try again in 5 seconds ...
	I0802 11:04:15.297310    4460 start.go:360] acquireMachinesLock for docker-flags-256000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:04:15.393062    4460 start.go:364] duration metric: took 95.604958ms to acquireMachinesLock for "docker-flags-256000"
	I0802 11:04:15.393216    4460 start.go:93] Provisioning new machine with config: &{Name:docker-flags-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:04:15.393507    4460 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:04:15.402111    4460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:04:15.450875    4460 start.go:159] libmachine.API.Create for "docker-flags-256000" (driver="qemu2")
	I0802 11:04:15.450922    4460 client.go:168] LocalClient.Create starting
	I0802 11:04:15.451039    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:04:15.451099    4460 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:15.451119    4460 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:15.451194    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:04:15.451240    4460 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:15.451250    4460 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:15.451954    4460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:04:15.613497    4460 main.go:141] libmachine: Creating SSH key...
	I0802 11:04:15.865571    4460 main.go:141] libmachine: Creating Disk image...
	I0802 11:04:15.865583    4460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:04:15.865800    4460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:15.875517    4460 main.go:141] libmachine: STDOUT: 
	I0802 11:04:15.875540    4460 main.go:141] libmachine: STDERR: 
	I0802 11:04:15.875587    4460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2 +20000M
	I0802 11:04:15.883496    4460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:04:15.883523    4460 main.go:141] libmachine: STDERR: 
	I0802 11:04:15.883538    4460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:15.883544    4460 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:04:15.883551    4460 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:04:15.883583    4460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:69:fd:70:b2:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/docker-flags-256000/disk.qcow2
	I0802 11:04:15.885264    4460 main.go:141] libmachine: STDOUT: 
	I0802 11:04:15.885278    4460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:04:15.885294    4460 client.go:171] duration metric: took 434.382625ms to LocalClient.Create
	I0802 11:04:17.887501    4460 start.go:128] duration metric: took 2.493981291s to createHost
	I0802 11:04:17.887582    4460 start.go:83] releasing machines lock for "docker-flags-256000", held for 2.494537583s
	W0802 11:04:17.888065    4460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:17.906895    4460 out.go:177] 
	W0802 11:04:17.914606    4460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:04:17.914632    4460 out.go:239] * 
	* 
	W0802 11:04:17.917230    4460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:04:17.926694    4460 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-256000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
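
The verbose trace above shows the launch mechanism exactly: libmachine execs socket_vmnet_client with the socket path followed by the full qemu-system-aarch64 command line, and the client is expected to pass the connected vmnet socket to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). Because the connect is refused, QEMU is never started. The invocation shape, trimmed from the log:

	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	    qemu-system-aarch64 -M virt,highmem=off -cpu host ... \
	    -device virtio-net-pci,netdev=net0,mac=... -netdev socket,id=net0,fd=3 ...
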
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (72.756166ms)

-- stdout --
	* The control-plane node docker-flags-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-256000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"*.
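
Both environment assertions come from the --docker-env flags in the start invocation. On a running node the property query returns the unit's environment on one line, approximately:

	$ out/minikube-darwin-arm64 -p docker-flags-256000 ssh \
	    "sudo systemctl show docker --property=Environment --no-pager"
	# expected shape: Environment=FOO=BAR BAZ=BAT
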
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-256000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.62475ms)

-- stdout --
	* The control-plane node docker-flags-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-256000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-256000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-256000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-256000\"\n"
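
Likewise, --docker-opt=debug and --docker-opt=icc=true should surface on the dockerd command line, which is what the ExecStart probe greps for:

	$ out/minikube-darwin-arm64 -p docker-flags-256000 ssh \
	    "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to contain: ... dockerd ... --debug ... --icc=true ...
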
panic.go:626: *** TestDockerFlags FAILED at 2024-08-02 11:04:18.061591 -0700 PDT m=+2329.572196501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-256000 -n docker-flags-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-256000 -n docker-flags-256000: exit status 7 (29.070125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-256000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-256000
--- FAIL: TestDockerFlags (10.33s)

TestForceSystemdFlag (10.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
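
This test starts a node with --force-systemd and then, in a step never reached here, verifies Docker's cgroup driver. A sketch of that follow-up check, assuming a running node:

	$ out/minikube-darwin-arm64 -p force-systemd-flag-329000 ssh \
	    "docker info --format {{.CgroupDriver}}"
	# expected: systemd ("cgroupfs" would mean --force-systemd was not honored)
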
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-329000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-329000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0038135s)

-- stdout --
	* [force-systemd-flag-329000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-329000" primary control-plane node in "force-systemd-flag-329000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-329000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:04:02.843223    4438 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:04:02.843350    4438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:04:02.843352    4438 out.go:304] Setting ErrFile to fd 2...
	I0802 11:04:02.843355    4438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:04:02.843493    4438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:04:02.844554    4438 out.go:298] Setting JSON to false
	I0802 11:04:02.860670    4438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3806,"bootTime":1722618036,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:04:02.860731    4438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:04:02.865188    4438 out.go:177] * [force-systemd-flag-329000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:04:02.872541    4438 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:04:02.872580    4438 notify.go:220] Checking for updates...
	I0802 11:04:02.881441    4438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:04:02.885396    4438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:04:02.888506    4438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:04:02.891466    4438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:04:02.894470    4438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:04:02.897810    4438 config.go:182] Loaded profile config "force-systemd-env-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:04:02.897882    4438 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:04:02.897926    4438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:04:02.901475    4438 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:04:02.908462    4438 start.go:297] selected driver: qemu2
	I0802 11:04:02.908467    4438 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:04:02.908473    4438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:04:02.910729    4438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:04:02.914430    4438 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:04:02.917519    4438 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 11:04:02.917547    4438 cni.go:84] Creating CNI manager for ""
	I0802 11:04:02.917555    4438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:04:02.917564    4438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:04:02.917590    4438 start.go:340] cluster config:
	{Name:force-systemd-flag-329000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:04:02.921330    4438 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:04:02.928473    4438 out.go:177] * Starting "force-systemd-flag-329000" primary control-plane node in "force-systemd-flag-329000" cluster
	I0802 11:04:02.932411    4438 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:04:02.932424    4438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:04:02.932436    4438 cache.go:56] Caching tarball of preloaded images
	I0802 11:04:02.932489    4438 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:04:02.932495    4438 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:04:02.932548    4438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/force-systemd-flag-329000/config.json ...
	I0802 11:04:02.932559    4438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/force-systemd-flag-329000/config.json: {Name:mk591fe18a080b9f45f55d7a103a031574b2d748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:04:02.932781    4438 start.go:360] acquireMachinesLock for force-systemd-flag-329000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:04:02.932821    4438 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "force-systemd-flag-329000"
	I0802 11:04:02.932833    4438 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:04:02.932862    4438 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:04:02.941448    4438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:04:02.959191    4438 start.go:159] libmachine.API.Create for "force-systemd-flag-329000" (driver="qemu2")
	I0802 11:04:02.959219    4438 client.go:168] LocalClient.Create starting
	I0802 11:04:02.959284    4438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:04:02.959314    4438 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:02.959321    4438 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:02.959360    4438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:04:02.959385    4438 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:02.959393    4438 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:02.959755    4438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:04:03.116738    4438 main.go:141] libmachine: Creating SSH key...
	I0802 11:04:03.244737    4438 main.go:141] libmachine: Creating Disk image...
	I0802 11:04:03.244744    4438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:04:03.244975    4438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:03.254511    4438 main.go:141] libmachine: STDOUT: 
	I0802 11:04:03.254530    4438 main.go:141] libmachine: STDERR: 
	I0802 11:04:03.254586    4438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2 +20000M
	I0802 11:04:03.262434    4438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:04:03.262450    4438 main.go:141] libmachine: STDERR: 
	I0802 11:04:03.262464    4438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:03.262470    4438 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:04:03.262489    4438 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:04:03.262547    4438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:5b:c3:fd:17:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:03.264177    4438 main.go:141] libmachine: STDOUT: 
	I0802 11:04:03.264192    4438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:04:03.264209    4438 client.go:171] duration metric: took 304.9955ms to LocalClient.Create
	I0802 11:04:05.266352    4438 start.go:128] duration metric: took 2.333543042s to createHost
	I0802 11:04:05.266456    4438 start.go:83] releasing machines lock for "force-systemd-flag-329000", held for 2.333707792s
	W0802 11:04:05.266619    4438 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:05.291864    4438 out.go:177] * Deleting "force-systemd-flag-329000" in qemu2 ...
	W0802 11:04:05.315713    4438 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:05.315759    4438 start.go:729] Will try again in 5 seconds ...
	I0802 11:04:10.317754    4438 start.go:360] acquireMachinesLock for force-systemd-flag-329000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:04:10.318134    4438 start.go:364] duration metric: took 263.75µs to acquireMachinesLock for "force-systemd-flag-329000"
	I0802 11:04:10.318226    4438 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:04:10.318469    4438 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:04:10.327101    4438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:04:10.377124    4438 start.go:159] libmachine.API.Create for "force-systemd-flag-329000" (driver="qemu2")
	I0802 11:04:10.377171    4438 client.go:168] LocalClient.Create starting
	I0802 11:04:10.377295    4438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:04:10.377354    4438 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:10.377370    4438 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:10.377426    4438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:04:10.377470    4438 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:10.377480    4438 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:10.378269    4438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:04:10.551618    4438 main.go:141] libmachine: Creating SSH key...
	I0802 11:04:10.754669    4438 main.go:141] libmachine: Creating Disk image...
	I0802 11:04:10.754677    4438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:04:10.754904    4438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:10.764799    4438 main.go:141] libmachine: STDOUT: 
	I0802 11:04:10.764817    4438 main.go:141] libmachine: STDERR: 
	I0802 11:04:10.764863    4438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2 +20000M
	I0802 11:04:10.772810    4438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:04:10.772832    4438 main.go:141] libmachine: STDERR: 
	I0802 11:04:10.772846    4438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:10.772849    4438 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:04:10.772858    4438 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:04:10.772894    4438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4d:9c:01:94:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-flag-329000/disk.qcow2
	I0802 11:04:10.774574    4438 main.go:141] libmachine: STDOUT: 
	I0802 11:04:10.774588    4438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:04:10.774599    4438 client.go:171] duration metric: took 397.436667ms to LocalClient.Create
	I0802 11:04:12.776696    4438 start.go:128] duration metric: took 2.458288083s to createHost
	I0802 11:04:12.776760    4438 start.go:83] releasing machines lock for "force-systemd-flag-329000", held for 2.458690209s
	W0802 11:04:12.777072    4438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-329000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:12.791771    4438 out.go:177] 
	W0802 11:04:12.794814    4438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:04:12.794855    4438 out.go:239] * 
	W0802 11:04:12.797460    4438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:04:12.804719    4438 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-329000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-329000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-329000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.625708ms)

-- stdout --
	* The control-plane node force-systemd-flag-329000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-329000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-329000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
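For context, this is the assertion the test was building toward: docker_test.go:110-112 ssh into the node and read Docker's cgroup driver, expecting "systemd" when --force-systemd is set. A minimal Go sketch of that check (a hypothetical standalone helper, not the test's exact code; the command line and profile name are copied from this log):

// cgroupcheck.go: simplified sketch of the cgroup-driver assertion behind
// docker_test.go:110-112; hypothetical helper, not part of the test suite.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs; exit status 83 in this log means the
	// host is Stopped, so the ssh never reaches a Docker daemon at all.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-329000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "ssh failed: %v\n%s", err, out)
		os.Exit(1)
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Fprintf(os.Stderr, "cgroup driver = %q, want systemd\n", got)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd")
}

Here the check never gets that far: the VM was never created, so the assertion fails on the ssh step rather than on the driver value.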
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-02 11:04:12.896804 -0700 PDT m=+2324.407226501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-329000 -n force-systemd-flag-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-329000 -n force-systemd-flag-329000: exit status 7 (34.098833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-329000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-329000
--- FAIL: TestForceSystemdFlag (10.19s)
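The failure itself looks environmental rather than a product regression: both createHost attempts die because socket_vmnet_client cannot reach the vmnet daemon's unix socket. A minimal pre-flight probe, sketched in Go under the assumption that the daemon should be listening at the SocketVMnetPath shown in the config dump above (/var/run/socket_vmnet), would tell a down daemon on the CI host apart from a driver bug:

// socketprobe.go: minimal sketch of a pre-flight check for the socket_vmnet
// daemon; hypothetical helper, not part of minikube or the test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the SocketVMnetPath field in the cluster config above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failure in this log: the
		// daemon is not listening, so every VM create attempt must fail.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refusal from this probe before the test even starts would mark the run as an infrastructure failure instead of a test failure.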

TestForceSystemdEnv (12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-500000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-500000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.80788425s)

-- stdout --
	* [force-systemd-env-500000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-500000" primary control-plane node in "force-systemd-env-500000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-500000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:03:55.865703    4406 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:03:55.865857    4406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:55.865861    4406 out.go:304] Setting ErrFile to fd 2...
	I0802 11:03:55.865863    4406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:55.866012    4406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:03:55.867015    4406 out.go:298] Setting JSON to false
	I0802 11:03:55.883471    4406 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3799,"bootTime":1722618036,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:03:55.883540    4406 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:03:55.889635    4406 out.go:177] * [force-systemd-env-500000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:03:55.897602    4406 notify.go:220] Checking for updates...
	I0802 11:03:55.902674    4406 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:03:55.905653    4406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:03:55.908676    4406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:03:55.912614    4406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:03:55.915677    4406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:03:55.918666    4406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0802 11:03:55.921900    4406 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:03:55.921947    4406 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:03:55.926604    4406 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:03:55.933572    4406 start.go:297] selected driver: qemu2
	I0802 11:03:55.933578    4406 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:03:55.933583    4406 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:03:55.936022    4406 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:03:55.938594    4406 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:03:55.941592    4406 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 11:03:55.941623    4406 cni.go:84] Creating CNI manager for ""
	I0802 11:03:55.941630    4406 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:03:55.941635    4406 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:03:55.941664    4406 start.go:340] cluster config:
	{Name:force-systemd-env-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:03:55.945610    4406 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:55.953434    4406 out.go:177] * Starting "force-systemd-env-500000" primary control-plane node in "force-systemd-env-500000" cluster
	I0802 11:03:55.957720    4406 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:03:55.957739    4406 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:03:55.957757    4406 cache.go:56] Caching tarball of preloaded images
	I0802 11:03:55.957827    4406 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:03:55.957841    4406 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:03:55.957895    4406 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/force-systemd-env-500000/config.json ...
	I0802 11:03:55.957907    4406 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/force-systemd-env-500000/config.json: {Name:mk6fe890319d418fee8d61aff2987305da608cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:03:55.958121    4406 start.go:360] acquireMachinesLock for force-systemd-env-500000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:03:55.958156    4406 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "force-systemd-env-500000"
	I0802 11:03:55.958167    4406 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:03:55.958190    4406 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:03:55.964551    4406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:03:55.981446    4406 start.go:159] libmachine.API.Create for "force-systemd-env-500000" (driver="qemu2")
	I0802 11:03:55.981479    4406 client.go:168] LocalClient.Create starting
	I0802 11:03:55.981543    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:03:55.981574    4406 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:55.981585    4406 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:55.981622    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:03:55.981644    4406 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:55.981654    4406 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:55.982006    4406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:03:56.134160    4406 main.go:141] libmachine: Creating SSH key...
	I0802 11:03:56.206724    4406 main.go:141] libmachine: Creating Disk image...
	I0802 11:03:56.206730    4406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:03:56.206935    4406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:03:56.216095    4406 main.go:141] libmachine: STDOUT: 
	I0802 11:03:56.216111    4406 main.go:141] libmachine: STDERR: 
	I0802 11:03:56.216167    4406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2 +20000M
	I0802 11:03:56.223992    4406 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:03:56.224005    4406 main.go:141] libmachine: STDERR: 
	I0802 11:03:56.224019    4406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:03:56.224024    4406 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:03:56.224050    4406 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:03:56.224076    4406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:aa:42:e2:3f:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:03:56.225635    4406 main.go:141] libmachine: STDOUT: 
	I0802 11:03:56.225651    4406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:03:56.225669    4406 client.go:171] duration metric: took 244.193375ms to LocalClient.Create
	I0802 11:03:58.227743    4406 start.go:128] duration metric: took 2.269616875s to createHost
	I0802 11:03:58.227799    4406 start.go:83] releasing machines lock for "force-systemd-env-500000", held for 2.269713792s
	W0802 11:03:58.227922    4406 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:58.234526    4406 out.go:177] * Deleting "force-systemd-env-500000" in qemu2 ...
	W0802 11:03:58.260838    4406 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:58.260864    4406 start.go:729] Will try again in 5 seconds ...
	I0802 11:04:03.262011    4406 start.go:360] acquireMachinesLock for force-systemd-env-500000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:04:05.266681    4406 start.go:364] duration metric: took 2.004698459s to acquireMachinesLock for "force-systemd-env-500000"
	I0802 11:04:05.266895    4406 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:04:05.267155    4406 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:04:05.281906    4406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 11:04:05.329781    4406 start.go:159] libmachine.API.Create for "force-systemd-env-500000" (driver="qemu2")
	I0802 11:04:05.329831    4406 client.go:168] LocalClient.Create starting
	I0802 11:04:05.329943    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:04:05.330008    4406 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:05.330025    4406 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:05.330090    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:04:05.330135    4406 main.go:141] libmachine: Decoding PEM data...
	I0802 11:04:05.330153    4406 main.go:141] libmachine: Parsing certificate...
	I0802 11:04:05.330619    4406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:04:05.493248    4406 main.go:141] libmachine: Creating SSH key...
	I0802 11:04:05.568611    4406 main.go:141] libmachine: Creating Disk image...
	I0802 11:04:05.568616    4406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:04:05.568833    4406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:04:05.578443    4406 main.go:141] libmachine: STDOUT: 
	I0802 11:04:05.578462    4406 main.go:141] libmachine: STDERR: 
	I0802 11:04:05.578519    4406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2 +20000M
	I0802 11:04:05.586369    4406 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:04:05.586392    4406 main.go:141] libmachine: STDERR: 
	I0802 11:04:05.586408    4406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:04:05.586412    4406 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:04:05.586418    4406 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:04:05.586444    4406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:60:91:1d:61:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/force-systemd-env-500000/disk.qcow2
	I0802 11:04:05.587989    4406 main.go:141] libmachine: STDOUT: 
	I0802 11:04:05.588013    4406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:04:05.588031    4406 client.go:171] duration metric: took 258.204041ms to LocalClient.Create
	I0802 11:04:07.588355    4406 start.go:128] duration metric: took 2.321210666s to createHost
	I0802 11:04:07.588475    4406 start.go:83] releasing machines lock for "force-systemd-env-500000", held for 2.321821416s
	W0802 11:04:07.588801    4406 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:04:07.608542    4406 out.go:177] 
	W0802 11:04:07.616427    4406 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:04:07.616470    4406 out.go:239] * 
	W0802 11:04:07.619011    4406 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:04:07.629312    4406 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-500000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-500000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-500000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.51ms)

-- stdout --
	* The control-plane node force-systemd-env-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-500000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-500000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-02 11:04:07.726859 -0700 PDT m=+2319.237099001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-500000 -n force-systemd-env-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-500000 -n force-systemd-env-500000: exit status 7 (33.397ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-500000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-500000
--- FAIL: TestForceSystemdEnv (12.00s)
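Same root cause as TestForceSystemdFlag. The log also shows the start path's retry shape: one non-fatal "StartHost failed, but will try again", a fixed 5-second sleep, then a terminal GUEST_PROVISION exit (status 80) on the second failure. An illustrative Go sketch of that two-attempt flow (not minikube's actual code; createHost is a stand-in that fails the way both attempts in this log do):

// retry.go: illustrative sketch of the two-attempt start behaviour visible
// in this log ("Will try again in 5 seconds ..."); not minikube's code.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the driver's host-creation step; here it always
// fails the same way the log does.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // fixed delay, matching the log
	if err := createHost(); err != nil {
		// The second failure is terminal: GUEST_PROVISION, exit status 80.
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80)
	}
}

With the vmnet daemon unreachable, the 5-second retry cannot help, which is why every qemu2 test in this report fails in roughly 10-12 seconds.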

TestFunctional/parallel/ServiceCmdConnect (35.92s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-775000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-775000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-rcrzx" [06fb08ca-3d7c-4650-b35f-ab095891c331] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-rcrzx" [06fb08ca-3d7c-4650-b35f-ab095891c331] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003987583s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32226
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32226: Get "http://192.168.105.4:32226": dial tcp 192.168.105.4:32226: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-775000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-rcrzx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-775000/192.168.105.4
Start Time:       Fri, 02 Aug 2024 10:37:30 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://b8ff7e42e55e21d5595e421496a3c8e393869b4c2397511b8a87002db4db2dd1
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 02 Aug 2024 10:37:42 -0700
      Finished:     Fri, 02 Aug 2024 10:37:42 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rvbg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8rvbg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-rcrzx to functional-775000
  Normal   Pulled     23s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    23s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    23s (x3 over 35s)  kubelet            Started container echoserver-arm
  Warning  BackOff    11s (x3 over 34s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-rcrzx_default(06fb08ca-3d7c-4650-b35f-ab095891c331)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-775000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-775000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.96.71
IPs:                      10.111.96.71
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32226/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
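Taken together, the dumps above pin down the failure chain: the echoserver-arm:1.8 container exits immediately with "exec /usr/sbin/nginx: exec format error" (the image ships an nginx binary built for the wrong architecture), so the pod never becomes Ready, the service's Endpoints list stays empty, and every connection to NodePort 32226 is refused. A simplified Go sketch of the poll that functional_test.go:1657 performs against the URL reported earlier in this log (retry count and delay are illustrative, not the test's exact values):

// fetchpoll.go: simplified sketch of the NodePort poll behind
// functional_test.go:1657; not the test's exact code.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// NodePort URL reported by `minikube service hello-node-connect --url`.
	const url = "http://192.168.105.4:32226"
	for attempt := 0; attempt < 8; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			// With no ready endpoints behind the service, the connection is
			// refused, which is exactly what this log records eight times.
			fmt.Printf("error fetching %s: %v\n", url, err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("service responded:", resp.Status)
		return
	}
	fmt.Println("giving up: service never answered")
}

No amount of retrying can succeed here: until the container image matches the node's architecture, the service will never gain an endpoint.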
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-775000 -n functional-775000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh -- ls                                                                                         | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh cat                                                                                           | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | /mount-9p/test-1722620272675951000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh stat                                                                                          | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh stat                                                                                          | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh sudo                                                                                          | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:37 PDT | 02 Aug 24 10:37 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port980471019/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT | 02 Aug 24 10:38 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh -- ls                                                                                         | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT | 02 Aug 24 10:38 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh sudo                                                                                          | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT | 02 Aug 24 10:38 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT | 02 Aug 24 10:38 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-775000 ssh findmnt                                                                                       | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT | 02 Aug 24 10:38 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-775000 --dry-run                                                                                      | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-775000                                                                                                | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-775000 | jenkins | v1.33.1 | 02 Aug 24 10:38 PDT |                     |
	|           | -p functional-775000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
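	
	For reference, the specific-port mount flow recorded above can be replayed by hand. The local source path
	below is illustrative (the test mounts a temporary folder under /var/folders); the profile name, guest
	path, and port are taken from the command table:
	
	  minikube -p functional-775000 mount /tmp/mount-src:/mount-9p --port 46464 &   # serve the 9p share
	  minikube -p functional-775000 ssh -- "findmnt -T /mount-9p | grep 9p"         # confirm the mount is live
	  minikube -p functional-775000 ssh -- "sudo umount -f /mount-9p"               # tear it down, as the test does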
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 10:38:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 10:38:01.943168    2844 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:38:01.943275    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.943279    2844 out.go:304] Setting ErrFile to fd 2...
	I0802 10:38:01.943281    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.943405    2844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:38:01.944678    2844 out.go:298] Setting JSON to false
	I0802 10:38:01.961973    2844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2244,"bootTime":1722618037,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:38:01.962063    2844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:38:01.967268    2844 out.go:177] * [functional-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:38:01.975377    2844 notify.go:220] Checking for updates...
	I0802 10:38:01.978201    2844 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 10:38:01.982278    2844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:38:01.983550    2844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:38:01.986200    2844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:38:01.989257    2844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 10:38:01.992255    2844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 10:38:01.995491    2844 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:38:01.995752    2844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:38:02.000240    2844 out.go:177] * Using the qemu2 driver based on the existing profile
	I0802 10:38:02.007243    2844 start.go:297] selected driver: qemu2
	I0802 10:38:02.007250    2844 start.go:901] validating driver "qemu2" against &{Name:functional-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:38:02.007316    2844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 10:38:02.013212    2844 out.go:177] 
	W0802 10:38:02.017234    2844 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0802 10:38:02.023163    2844 out.go:177] 
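	
	The dry-run above requests 250 MiB, below minikube's usable minimum of 1800 MB, so start exits with
	RSRC_INSUFFICIENT_REQ_MEMORY before the qemu2 driver is ever invoked. The same dry-run clears the memory
	check when the request meets the minimum (2048mb here is an illustrative value, not one used by this run):
	
	  minikube start -p functional-775000 --dry-run --memory 2048mb --alsologtostderr --driver=qemu2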
	
	
	==> Docker <==
	Aug 02 17:37:57 functional-775000 dockerd[5830]: time="2024-08-02T17:37:57.177930949Z" level=warning msg="cleaning up after shim disconnected" id=de9db94cfaecf549f00dbcf59d84dbed96ae2bdca04c1b3456dad29bba81180a namespace=moby
	Aug 02 17:37:57 functional-775000 dockerd[5830]: time="2024-08-02T17:37:57.177935241Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:37:59 functional-775000 dockerd[5830]: time="2024-08-02T17:37:59.132013913Z" level=info msg="shim disconnected" id=c9fb9e01eb6b0a221dc0e98ddb032fd695d9b60a3eaa26c2bb3241ad6bd3f997 namespace=moby
	Aug 02 17:37:59 functional-775000 dockerd[5830]: time="2024-08-02T17:37:59.132045371Z" level=warning msg="cleaning up after shim disconnected" id=c9fb9e01eb6b0a221dc0e98ddb032fd695d9b60a3eaa26c2bb3241ad6bd3f997 namespace=moby
	Aug 02 17:37:59 functional-775000 dockerd[5830]: time="2024-08-02T17:37:59.132049746Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:37:59 functional-775000 dockerd[5824]: time="2024-08-02T17:37:59.132124621Z" level=info msg="ignoring event" container=c9fb9e01eb6b0a221dc0e98ddb032fd695d9b60a3eaa26c2bb3241ad6bd3f997 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.625431657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.625469324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.625475532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.625511365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.650871560Z" level=info msg="shim disconnected" id=c184f0471e8a3c5b55aab42602c894307e475bfe6f94f765c39588c995277a95 namespace=moby
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.650915560Z" level=warning msg="cleaning up after shim disconnected" id=c184f0471e8a3c5b55aab42602c894307e475bfe6f94f765c39588c995277a95 namespace=moby
	Aug 02 17:38:01 functional-775000 dockerd[5830]: time="2024-08-02T17:38:01.650920226Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:38:01 functional-775000 dockerd[5824]: time="2024-08-02T17:38:01.651241099Z" level=info msg="ignoring event" container=c184f0471e8a3c5b55aab42602c894307e475bfe6f94f765c39588c995277a95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.863171373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.863213373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.863223790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.863276289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.873031220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.879037303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.879131427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:02 functional-775000 dockerd[5830]: time="2024-08-02T17:38:02.879194260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:38:02 functional-775000 cri-dockerd[6078]: time="2024-08-02T17:38:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ccf804d33c3a16ecb83f61265e0de083579eb0d0850e0a7335dd4569611dab1/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 02 17:38:02 functional-775000 cri-dockerd[6078]: time="2024-08-02T17:38:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/92f95523c81b470bf3cd8e6f58e194e0b9b7b513333a2c214045c847806ff330/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 02 17:38:03 functional-775000 dockerd[5824]: time="2024-08-02T17:38:03.161484025Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c184f0471e8a3       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            3                   a0aab21c9eec8       hello-node-65f5d5cc78-fvrvc
	de9db94cfaecf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   c9fb9e01eb6b0       busybox-mount
	292ea9feebc2d       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         19 seconds ago       Running             myfrontend                0                   7a6b56899a5a5       sp-pod
	b8ff7e42e55e2       72565bf5bbedf                                                                                         23 seconds ago       Exited              echoserver-arm            2                   76edb185a469a       hello-node-connect-6f49f58cd5-rcrzx
	c9cfdc5e8d446       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         42 seconds ago       Running             nginx                     0                   120014b6e72ca       nginx-svc
	192aa164c6a0f       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   75878204ae706       coredns-7db6d8ff4d-lsp5g
	c71df583aa504       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   bdb41207566f4       kube-proxy-r5tz6
	572780d2a26de       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   23f9d7641e883       storage-provisioner
	9c1967b5ccdbe       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   bf9ccbaa7646d       kube-scheduler-functional-775000
	1c2725ec6455d       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   f72989d35ef50       kube-controller-manager-functional-775000
	4aa0db7f3c7e1       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   1794f7ea97596       etcd-functional-775000
	69b4d3c6ec004       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   797a94d092f0c       kube-apiserver-functional-775000
	159abd7b7e2e7       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   bc8f0ecd29eb4       coredns-7db6d8ff4d-lsp5g
	d8391a6195732       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   c25cdf13b1382       storage-provisioner
	a3b39719502ea       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   37521ca60f8cd       kube-proxy-r5tz6
	32fb4774cb62f       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   ed0cd21dbe117       kube-controller-manager-functional-775000
	066d8f82642dc       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   db6efc2d07da3       kube-scheduler-functional-775000
	e6d2a0ff11af2       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   2f1169154a35c       etcd-functional-775000
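	
	The table above reflects container state inside the guest, including the repeatedly Exited echoserver-arm
	attempts behind the ServiceCmdConnect failure. Assuming crictl is on the guest path, as in stock minikube
	ISOs, a comparable all-containers view can be pulled with:
	
	  minikube -p functional-775000 ssh -- "sudo crictl ps -a"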
	
	
	==> coredns [159abd7b7e2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45705 - 14473 "HINFO IN 8199400376607251289.5803701078308906014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009939356s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [192aa164c6a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52115 - 9055 "HINFO IN 5572882363863277850.6910450153699115145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011723364s
	[INFO] 10.244.0.1:22728 - 20031 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000107374s
	[INFO] 10.244.0.1:3847 - 40505 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096833s
	[INFO] 10.244.0.1:57809 - 27174 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029375s
	[INFO] 10.244.0.1:25850 - 7593 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001067788s
	[INFO] 10.244.0.1:20527 - 62758 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000124124s
	[INFO] 10.244.0.1:42609 - 36001 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000194374s
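	
	The NOERROR answers above show nginx-svc.default.svc.cluster.local resolving through CoreDNS at
	10.96.0.10. The lookup can be reproduced from inside the cluster; busybox:1.28 is an illustrative image
	commonly used for DNS checks, not one deployed by this run:
	
	  kubectl --context functional-775000 run dnscheck --rm -it --restart=Never --image=busybox:1.28 -- nslookup nginx-svc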
	
	
	==> describe nodes <==
	Name:               functional-775000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-775000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=functional-775000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T10_35_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-775000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:38:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:37:51 +0000   Fri, 02 Aug 2024 17:35:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:37:51 +0000   Fri, 02 Aug 2024 17:35:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:37:51 +0000   Fri, 02 Aug 2024 17:35:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:37:51 +0000   Fri, 02 Aug 2024 17:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-775000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b18ffb12ed04b5aa03abc7f2ca7383b
	  System UUID:                4b18ffb12ed04b5aa03abc7f2ca7383b
	  Boot ID:                    477cfb90-6262-430d-9379-d00510f93941
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-fvrvc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-6f49f58cd5-rcrzx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-7db6d8ff4d-lsp5g                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 etcd-functional-775000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-775000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-775000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-r5tz6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-functional-775000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-lckqv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-lwbsj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m17s              kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 116s               kube-proxy       
	  Normal  NodeHasSufficientMemory  2m32s              kubelet          Node functional-775000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m32s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m32s              kubelet          Node functional-775000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s              kubelet          Node functional-775000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m32s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s              kubelet          Node functional-775000 status is now: NodeReady
	  Normal  RegisteredNode           2m19s              node-controller  Node functional-775000 event: Registered Node functional-775000 in Controller
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node functional-775000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node functional-775000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node functional-775000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s               node-controller  Node functional-775000 event: Registered Node functional-775000 in Controller
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node functional-775000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node functional-775000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node functional-775000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                node-controller  Node functional-775000 event: Registered Node functional-775000 in Controller
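	
	The node description above is plain kubectl output; assuming the kubeconfig context that the profile
	creates, it can be refreshed with:
	
	  kubectl --context functional-775000 describe node functional-775000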
	
	
	==> dmesg <==
	[  +3.469308] kauditd_printk_skb: 201 callbacks suppressed
	[ +11.271751] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.250274] systemd-fstab-generator[4899]: Ignoring "noauto" option for root device
	[ +10.106729] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[  +0.055192] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.108176] systemd-fstab-generator[5360]: Ignoring "noauto" option for root device
	[  +0.107796] systemd-fstab-generator[5372]: Ignoring "noauto" option for root device
	[  +0.099085] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +5.101962] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.319215] systemd-fstab-generator[6031]: Ignoring "noauto" option for root device
	[  +0.086656] systemd-fstab-generator[6043]: Ignoring "noauto" option for root device
	[  +0.084071] systemd-fstab-generator[6055]: Ignoring "noauto" option for root device
	[  +0.105750] systemd-fstab-generator[6070]: Ignoring "noauto" option for root device
	[  +0.234058] systemd-fstab-generator[6237]: Ignoring "noauto" option for root device
	[  +1.095709] systemd-fstab-generator[6361]: Ignoring "noauto" option for root device
	[  +3.400953] kauditd_printk_skb: 199 callbacks suppressed
	[Aug 2 17:37] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.682370] systemd-fstab-generator[7354]: Ignoring "noauto" option for root device
	[  +3.881260] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.351920] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.131811] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.068224] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.486987] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.240032] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.181777] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [4aa0db7f3c7e] <==
	{"level":"info","ts":"2024-08-02T17:36:48.292192Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T17:36:48.292211Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T17:36:48.292383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-02T17:36:48.292438Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-02T17:36:48.292509Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T17:36:48.292539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T17:36:48.303266Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-02T17:36:48.303446Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T17:36:48.303467Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T17:36:48.304205Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-02T17:36:48.306397Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-02T17:36:49.783439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:49.783597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:49.783679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:49.783725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-02T17:36:49.783743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-02T17:36:49.78377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-02T17:36:49.783788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-02T17:36:49.789633Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T17:36:49.78963Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-775000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T17:36:49.789952Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T17:36:49.790212Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T17:36:49.790238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T17:36:49.794181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-02T17:36:49.794209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e6d2a0ff11af] <==
	{"level":"info","ts":"2024-08-02T17:36:05.75935Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T17:36:06.854507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T17:36:06.85465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T17:36:06.854701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-02T17:36:06.854732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:06.854763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:06.855174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:06.855261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-02T17:36:06.860696Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-775000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T17:36:06.860971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T17:36:06.861523Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T17:36:06.861581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T17:36:06.861624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T17:36:06.86503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T17:36:06.865089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-02T17:36:33.539001Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-02T17:36:33.539026Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-775000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-02T17:36:33.539064Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T17:36:33.539108Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T17:36:33.550424Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T17:36:33.550453Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T17:36:33.550477Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-02T17:36:33.553086Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-02T17:36:33.553132Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-02T17:36:33.55314Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-775000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 17:38:05 up 2 min,  0 users,  load average: 0.24, 0.20, 0.09
	Linux functional-775000 5.10.207 #1 SMP PREEMPT Wed Jul 31 12:01:14 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [69b4d3c6ec00] <==
	I0802 17:36:50.427376       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 17:36:50.427400       1 aggregator.go:165] initial CRD sync complete...
	I0802 17:36:50.427407       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 17:36:50.427432       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 17:36:50.427435       1 cache.go:39] Caches are synced for autoregister controller
	I0802 17:36:50.439309       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 17:36:50.439348       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0802 17:36:50.439353       1 policy_source.go:224] refreshing policies
	I0802 17:36:50.473667       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 17:36:51.321167       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 17:36:51.661864       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 17:36:51.665564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 17:36:51.676695       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 17:36:51.683986       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 17:36:51.688669       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 17:37:03.247181       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 17:37:03.264344       1 controller.go:615] quota admission added evaluator for: endpoints
	I0802 17:37:09.810038       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.167.15"}
	I0802 17:37:15.119680       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0802 17:37:15.161591       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.204.180"}
	I0802 17:37:19.966221       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.27.24"}
	I0802 17:37:30.361208       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.96.71"}
	I0802 17:38:02.478079       1 controller.go:615] quota admission added evaluator for: namespaces
	I0802 17:38:02.568079       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.182.6"}
	I0802 17:38:02.594771       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.150.236"}
	
	
	==> kube-controller-manager [1c2725ec6455] <==
	I0802 17:37:30.877955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.458µs"
	I0802 17:37:31.887610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="27.25µs"
	I0802 17:37:33.907874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.75µs"
	I0802 17:37:42.592547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.042µs"
	I0802 17:37:42.955601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="27.792µs"
	I0802 17:37:49.591191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="32.666µs"
	I0802 17:37:54.588879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.208µs"
	I0802 17:38:02.075703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="30.458µs"
	I0802 17:38:02.503910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.880076ms"
	E0802 17:38:02.503930       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 17:38:02.512895       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.949145ms"
	E0802 17:38:02.512924       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 17:38:02.517495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.558468ms"
	E0802 17:38:02.517513       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 17:38:02.517675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="10.003679ms"
	E0802 17:38:02.517684       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 17:38:02.521483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.909354ms"
	E0802 17:38:02.521562       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0802 17:38:02.533710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.984263ms"
	I0802 17:38:02.536094       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.130789ms"
	I0802 17:38:02.541930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.790292ms"
	I0802 17:38:02.550525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="16.788423ms"
	I0802 17:38:02.550553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.791µs"
	I0802 17:38:02.554521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.566952ms"
	I0802 17:38:02.554596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="59.291µs"
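	
	The 'serviceaccount "kubernetes-dashboard" not found' errors at 17:38:02 are a transient ordering effect:
	the dashboard ReplicaSets were applied before their ServiceAccount existed, and the controller retried
	until the syncs later in the same second succeeded. Once settled, the account is visible with:
	
	  kubectl --context functional-775000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard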
	
	
	==> kube-controller-manager [32fb4774cb62] <==
	I0802 17:36:19.709113       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0802 17:36:19.709144       1 shared_informer.go:320] Caches are synced for PV protection
	I0802 17:36:19.709777       1 shared_informer.go:320] Caches are synced for endpoint
	I0802 17:36:19.709843       1 shared_informer.go:320] Caches are synced for crt configmap
	I0802 17:36:19.710457       1 shared_informer.go:320] Caches are synced for GC
	I0802 17:36:19.713232       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0802 17:36:19.720789       1 shared_informer.go:320] Caches are synced for job
	I0802 17:36:19.724011       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0802 17:36:19.727548       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0802 17:36:19.727564       1 shared_informer.go:320] Caches are synced for ephemeral
	I0802 17:36:19.730544       1 shared_informer.go:320] Caches are synced for namespace
	I0802 17:36:19.811842       1 shared_informer.go:320] Caches are synced for stateful set
	I0802 17:36:19.823285       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0802 17:36:19.899914       1 shared_informer.go:320] Caches are synced for daemon sets
	I0802 17:36:19.903446       1 shared_informer.go:320] Caches are synced for deployment
	I0802 17:36:19.907445       1 shared_informer.go:320] Caches are synced for disruption
	I0802 17:36:19.908661       1 shared_informer.go:320] Caches are synced for taint
	I0802 17:36:19.908739       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0802 17:36:19.908785       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-775000"
	I0802 17:36:19.908833       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0802 17:36:19.918996       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 17:36:19.932133       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 17:36:20.336461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 17:36:20.405495       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 17:36:20.405552       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a3b39719502e] <==
	I0802 17:36:08.657508       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:36:08.687134       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0802 17:36:08.701863       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:36:08.701881       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:36:08.701891       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:36:08.703017       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:36:08.703123       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:36:08.703149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:36:08.703699       1 config.go:192] "Starting service config controller"
	I0802 17:36:08.703730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:36:08.703754       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:36:08.703771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:36:08.703997       1 config.go:319] "Starting node config controller"
	I0802 17:36:08.704023       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:36:08.804516       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:36:08.804550       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:36:08.804561       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c71df583aa50] <==
	I0802 17:36:51.105717       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:36:51.109407       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0802 17:36:51.117057       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:36:51.117072       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:36:51.117078       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:36:51.117710       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:36:51.117770       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:36:51.117777       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:36:51.118126       1 config.go:192] "Starting service config controller"
	I0802 17:36:51.118136       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:36:51.118144       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:36:51.118147       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:36:51.118338       1 config.go:319] "Starting node config controller"
	I0802 17:36:51.118341       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:36:51.218721       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:36:51.218730       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:36:51.218766       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [066d8f82642d] <==
	I0802 17:36:06.209199       1 serving.go:380] Generated self-signed cert in-memory
	W0802 17:36:07.395553       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 17:36:07.395630       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:36:07.395662       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 17:36:07.395683       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 17:36:07.435164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 17:36:07.435202       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:36:07.435921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 17:36:07.435979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 17:36:07.435993       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 17:36:07.436004       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 17:36:07.536636       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 17:36:33.564808       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0802 17:36:33.564929       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0802 17:36:33.564981       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9c1967b5ccdb] <==
	I0802 17:36:48.746171       1 serving.go:380] Generated self-signed cert in-memory
	W0802 17:36:50.343672       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 17:36:50.343769       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:36:50.343791       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 17:36:50.343808       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 17:36:50.377189       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 17:36:50.377294       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:36:50.378069       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 17:36:50.378103       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 17:36:50.378471       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 17:36:50.378551       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 17:36:50.478723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 17:37:49 functional-775000 kubelet[6368]: E0802 17:37:49.585037    6368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-fvrvc_default(7cecc339-b4bc-49bb-941d-1bf84f6767be)\"" pod="default/hello-node-65f5d5cc78-fvrvc" podUID="7cecc339-b4bc-49bb-941d-1bf84f6767be"
	Aug 02 17:37:53 functional-775000 kubelet[6368]: I0802 17:37:53.753563    6368 topology_manager.go:215] "Topology Admit Handler" podUID="a1dfdf11-be25-4873-897b-b0fb878bc087" podNamespace="default" podName="busybox-mount"
	Aug 02 17:37:53 functional-775000 kubelet[6368]: I0802 17:37:53.930470    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a1dfdf11-be25-4873-897b-b0fb878bc087-test-volume\") pod \"busybox-mount\" (UID: \"a1dfdf11-be25-4873-897b-b0fb878bc087\") " pod="default/busybox-mount"
	Aug 02 17:37:53 functional-775000 kubelet[6368]: I0802 17:37:53.930495    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtfj\" (UniqueName: \"kubernetes.io/projected/a1dfdf11-be25-4873-897b-b0fb878bc087-kube-api-access-5rtfj\") pod \"busybox-mount\" (UID: \"a1dfdf11-be25-4873-897b-b0fb878bc087\") " pod="default/busybox-mount"
	Aug 02 17:37:54 functional-775000 kubelet[6368]: I0802 17:37:54.584426    6368 scope.go:117] "RemoveContainer" containerID="b8ff7e42e55e21d5595e421496a3c8e393869b4c2397511b8a87002db4db2dd1"
	Aug 02 17:37:54 functional-775000 kubelet[6368]: E0802 17:37:54.584509    6368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-rcrzx_default(06fb08ca-3d7c-4650-b35f-ab095891c331)\"" pod="default/hello-node-connect-6f49f58cd5-rcrzx" podUID="06fb08ca-3d7c-4650-b35f-ab095891c331"
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.166769    6368 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rtfj\" (UniqueName: \"kubernetes.io/projected/a1dfdf11-be25-4873-897b-b0fb878bc087-kube-api-access-5rtfj\") pod \"a1dfdf11-be25-4873-897b-b0fb878bc087\" (UID: \"a1dfdf11-be25-4873-897b-b0fb878bc087\") "
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.166809    6368 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a1dfdf11-be25-4873-897b-b0fb878bc087-test-volume\") pod \"a1dfdf11-be25-4873-897b-b0fb878bc087\" (UID: \"a1dfdf11-be25-4873-897b-b0fb878bc087\") "
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.166849    6368 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1dfdf11-be25-4873-897b-b0fb878bc087-test-volume" (OuterVolumeSpecName: "test-volume") pod "a1dfdf11-be25-4873-897b-b0fb878bc087" (UID: "a1dfdf11-be25-4873-897b-b0fb878bc087"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.167935    6368 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1dfdf11-be25-4873-897b-b0fb878bc087-kube-api-access-5rtfj" (OuterVolumeSpecName: "kube-api-access-5rtfj") pod "a1dfdf11-be25-4873-897b-b0fb878bc087" (UID: "a1dfdf11-be25-4873-897b-b0fb878bc087"). InnerVolumeSpecName "kube-api-access-5rtfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.267871    6368 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a1dfdf11-be25-4873-897b-b0fb878bc087-test-volume\") on node \"functional-775000\" DevicePath \"\""
	Aug 02 17:37:59 functional-775000 kubelet[6368]: I0802 17:37:59.267884    6368 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5rtfj\" (UniqueName: \"kubernetes.io/projected/a1dfdf11-be25-4873-897b-b0fb878bc087-kube-api-access-5rtfj\") on node \"functional-775000\" DevicePath \"\""
	Aug 02 17:38:00 functional-775000 kubelet[6368]: I0802 17:38:00.049120    6368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9fb9e01eb6b0a221dc0e98ddb032fd695d9b60a3eaa26c2bb3241ad6bd3f997"
	Aug 02 17:38:01 functional-775000 kubelet[6368]: I0802 17:38:01.584937    6368 scope.go:117] "RemoveContainer" containerID="471768ec0a9673f9339b8e16bd098ef6aef0caab33c1ea61770150457d96075f"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.061146    6368 scope.go:117] "RemoveContainer" containerID="471768ec0a9673f9339b8e16bd098ef6aef0caab33c1ea61770150457d96075f"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.061423    6368 scope.go:117] "RemoveContainer" containerID="c184f0471e8a3c5b55aab42602c894307e475bfe6f94f765c39588c995277a95"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: E0802 17:38:02.061550    6368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-fvrvc_default(7cecc339-b4bc-49bb-941d-1bf84f6767be)\"" pod="default/hello-node-65f5d5cc78-fvrvc" podUID="7cecc339-b4bc-49bb-941d-1bf84f6767be"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.530552    6368 topology_manager.go:215] "Topology Admit Handler" podUID="74dcf3e2-5a2c-4a51-bfa4-eedc97d52496" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-lwbsj"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: E0802 17:38:02.530589    6368 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1dfdf11-be25-4873-897b-b0fb878bc087" containerName="mount-munger"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.530621    6368 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dfdf11-be25-4873-897b-b0fb878bc087" containerName="mount-munger"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.531114    6368 topology_manager.go:215] "Topology Admit Handler" podUID="c018e817-948d-42c8-a5c1-a49efb0d3225" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-lckqv"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.685127    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c018e817-948d-42c8-a5c1-a49efb0d3225-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-lckqv\" (UID: \"c018e817-948d-42c8-a5c1-a49efb0d3225\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-lckqv"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.685166    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/74dcf3e2-5a2c-4a51-bfa4-eedc97d52496-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-lwbsj\" (UID: \"74dcf3e2-5a2c-4a51-bfa4-eedc97d52496\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-lwbsj"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.685175    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58w5\" (UniqueName: \"kubernetes.io/projected/c018e817-948d-42c8-a5c1-a49efb0d3225-kube-api-access-t58w5\") pod \"dashboard-metrics-scraper-b5fc48f67-lckqv\" (UID: \"c018e817-948d-42c8-a5c1-a49efb0d3225\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-lckqv"
	Aug 02 17:38:02 functional-775000 kubelet[6368]: I0802 17:38:02.685185    6368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzzh\" (UniqueName: \"kubernetes.io/projected/74dcf3e2-5a2c-4a51-bfa4-eedc97d52496-kube-api-access-6pzzh\") pod \"kubernetes-dashboard-779776cb65-lwbsj\" (UID: \"74dcf3e2-5a2c-4a51-bfa4-eedc97d52496\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-lwbsj"
	
	
	==> storage-provisioner [572780d2a26d] <==
	I0802 17:36:51.062686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 17:36:51.078697       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 17:36:51.078717       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 17:37:08.472634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 17:37:08.472890       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dacf9da-db9c-4fc7-957f-d702ab63323b", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-775000_ffb691c6-b1db-4ab0-bd05-f824f92d2f93 became leader
	I0802 17:37:08.472974       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-775000_ffb691c6-b1db-4ab0-bd05-f824f92d2f93!
	I0802 17:37:08.573289       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-775000_ffb691c6-b1db-4ab0-bd05-f824f92d2f93!
	I0802 17:37:31.735035       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0802 17:37:31.735278       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"904c2e3f-fcac-48b3-8a40-c7a041dcebe1", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0802 17:37:31.736274       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9e701e22-41a5-4381-b94c-d48d04b20bee 321 0 2024-08-02 17:35:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-02 17:35:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-904c2e3f-fcac-48b3-8a40-c7a041dcebe1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  904c2e3f-fcac-48b3-8a40-c7a041dcebe1 705 0 2024-08-02 17:37:31 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-02 17:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-02 17:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0802 17:37:31.736564       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-904c2e3f-fcac-48b3-8a40-c7a041dcebe1" provisioned
	I0802 17:37:31.736578       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0802 17:37:31.736581       1 volume_store.go:212] Trying to save persistentvolume "pvc-904c2e3f-fcac-48b3-8a40-c7a041dcebe1"
	I0802 17:37:31.747347       1 volume_store.go:219] persistentvolume "pvc-904c2e3f-fcac-48b3-8a40-c7a041dcebe1" saved
	I0802 17:37:31.757907       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"904c2e3f-fcac-48b3-8a40-c7a041dcebe1", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-904c2e3f-fcac-48b3-8a40-c7a041dcebe1
	
	
	==> storage-provisioner [d8391a619573] <==
	I0802 17:36:08.616520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 17:36:08.625052       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 17:36:08.625113       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 17:36:26.010950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 17:36:26.011091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-775000_f75b09f4-8a62-4d76-a7bb-e60bae7c1f45!
	I0802 17:36:26.011228       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dacf9da-db9c-4fc7-957f-d702ab63323b", APIVersion:"v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-775000_f75b09f4-8a62-4d76-a7bb-e60bae7c1f45 became leader
	I0802 17:36:26.111941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-775000_f75b09f4-8a62-4d76-a7bb-e60bae7c1f45!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-775000 -n functional-775000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-775000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-lckqv kubernetes-dashboard-779776cb65-lwbsj
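Note: busybox-mount shows up in this list even though it ran to completion, because the field selector status.phase!=Running also matches pods in phase Succeeded; only the two dashboard pods were genuinely not yet running. A quick way to confirm the phase, reusing the context name from the commands above:

    kubectl --context functional-775000 get po busybox-mount -o jsonpath='{.status.phase}'
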
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-775000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lckqv kubernetes-dashboard-779776cb65-lwbsj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-775000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lckqv kubernetes-dashboard-779776cb65-lwbsj: exit status 1 (43.402917ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-775000/192.168.105.4
	Start Time:       Fri, 02 Aug 2024 10:37:53 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://de9db94cfaecf549f00dbcf59d84dbed96ae2bdca04c1b3456dad29bba81180a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 02 Aug 2024 10:37:57 -0700
	      Finished:     Fri, 02 Aug 2024 10:37:57 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rtfj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5rtfj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-775000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.968s (2.968s including waiting). Image size: 3547125 bytes.
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-lckqv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-lwbsj" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-775000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-lckqv kubernetes-dashboard-779776cb65-lwbsj: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.92s)
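Note: the kubelet log above shows the test's echoserver-arm containers in CrashLoopBackOff (back-off climbing from 20s to 40s), so hello-node-connect never becomes ready to answer the service check. A minimal triage sketch, assuming the deployment names implied by the pod names in the log (hello-node, hello-node-connect):

    kubectl --context functional-775000 get pods -A
    kubectl --context functional-775000 logs deployment/hello-node-connect
    kubectl --context functional-775000 describe deployment hello-node-connect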

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 node stop m02 -v=7 --alsologtostderr
E0802 10:42:35.611190    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-982000 node stop m02 -v=7 --alsologtostderr: (12.192341667s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
E0802 10:42:56.092997    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:43:37.053428    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:44:58.973457    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:45:32.893943    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr: exit status 7 (2m55.968501s)

-- stdout --
	ha-982000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-982000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-982000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0802 10:42:40.969797    3152 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:42:40.969953    3152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:42:40.969958    3152 out.go:304] Setting ErrFile to fd 2...
	I0802 10:42:40.969960    3152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:42:40.970107    3152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:42:40.970236    3152 out.go:298] Setting JSON to false
	I0802 10:42:40.970252    3152 mustload.go:65] Loading cluster: ha-982000
	I0802 10:42:40.970307    3152 notify.go:220] Checking for updates...
	I0802 10:42:40.970491    3152 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:42:40.970500    3152 status.go:255] checking status of ha-982000 ...
	I0802 10:42:40.971406    3152 status.go:330] ha-982000 host status = "Running" (err=<nil>)
	I0802 10:42:40.971415    3152 host.go:66] Checking if "ha-982000" exists ...
	I0802 10:42:40.971520    3152 host.go:66] Checking if "ha-982000" exists ...
	I0802 10:42:40.971641    3152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:42:40.971651    3152 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/id_rsa Username:docker}
	W0802 10:43:06.896568    3152 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0802 10:43:06.896653    3152 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0802 10:43:06.896662    3152 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0802 10:43:06.896667    3152 status.go:257] ha-982000 status: &{Name:ha-982000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:43:06.896683    3152 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0802 10:43:06.896690    3152 status.go:255] checking status of ha-982000-m02 ...
	I0802 10:43:06.896903    3152 status.go:330] ha-982000-m02 host status = "Stopped" (err=<nil>)
	I0802 10:43:06.896909    3152 status.go:343] host is not running, skipping remaining checks
	I0802 10:43:06.896911    3152 status.go:257] ha-982000-m02 status: &{Name:ha-982000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:43:06.896915    3152 status.go:255] checking status of ha-982000-m03 ...
	I0802 10:43:06.897743    3152 status.go:330] ha-982000-m03 host status = "Running" (err=<nil>)
	I0802 10:43:06.897759    3152 host.go:66] Checking if "ha-982000-m03" exists ...
	I0802 10:43:06.898061    3152 host.go:66] Checking if "ha-982000-m03" exists ...
	I0802 10:43:06.898382    3152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:43:06.898397    3152 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m03/id_rsa Username:docker}
	W0802 10:44:21.898746    3152 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0802 10:44:21.898790    3152 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0802 10:44:21.898797    3152 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0802 10:44:21.898801    3152 status.go:257] ha-982000-m03 status: &{Name:ha-982000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:44:21.898809    3152 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0802 10:44:21.898814    3152 status.go:255] checking status of ha-982000-m04 ...
	I0802 10:44:21.899574    3152 status.go:330] ha-982000-m04 host status = "Running" (err=<nil>)
	I0802 10:44:21.899585    3152 host.go:66] Checking if "ha-982000-m04" exists ...
	I0802 10:44:21.899706    3152 host.go:66] Checking if "ha-982000-m04" exists ...
	I0802 10:44:21.899817    3152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:44:21.899823    3152 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m04/id_rsa Username:docker}
	W0802 10:45:36.900273    3152 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0802 10:45:36.900323    3152 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0802 10:45:36.900334    3152 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0802 10:45:36.900339    3152 status.go:257] ha-982000-m04 status: &{Name:ha-982000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:45:36.900348    3152 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
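Note: per the status.go trace above, minikube decides host health by opening an SSH session to each node and running df -h /var; every dial to port 22 timed out, which is what reports the running-looking hosts as Error/Nonexistent and stretches this one status call to nearly three minutes. A hand-run version of the same probe, using the node IPs and key path from the log (the nc/ssh invocations are a sketch, not the test's own code):

    nc -z -w 5 192.168.105.5 22
    ssh -i /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/id_rsa docker@192.168.105.5 'df -h /var'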
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-982000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 3 (25.960865875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0802 10:46:02.860828    3178 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0802 10:46:02.860836    3178 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0802 10:47:15.054056    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.031104959s)
ha_test.go:413: expected profile "ha-982000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-982000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-982000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-982000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
E0802 10:47:42.756887    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 3 (25.965322458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0802 10:47:46.795515    3202 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0802 10:47:46.795572    3202 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.00s)

TestMultiControlPlane/serial/RestartSecondaryNode (183.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.131343167s)

-- stdout --
	* Starting "ha-982000-m02" control-plane node in "ha-982000" cluster
	* Restarting existing qemu2 VM for "ha-982000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-982000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 10:47:46.869080    3210 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:47:46.869398    3210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:47:46.869403    3210 out.go:304] Setting ErrFile to fd 2...
	I0802 10:47:46.869406    3210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:47:46.869573    3210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:47:46.869868    3210 mustload.go:65] Loading cluster: ha-982000
	I0802 10:47:46.870189    3210 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0802 10:47:46.870531    3210 host.go:58] "ha-982000-m02" host status: Stopped
	I0802 10:47:46.875009    3210 out.go:177] * Starting "ha-982000-m02" control-plane node in "ha-982000" cluster
	I0802 10:47:46.877858    3210 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 10:47:46.877877    3210 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 10:47:46.877889    3210 cache.go:56] Caching tarball of preloaded images
	I0802 10:47:46.877971    3210 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 10:47:46.877979    3210 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 10:47:46.878049    3210 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/ha-982000/config.json ...
	I0802 10:47:46.878495    3210 start.go:360] acquireMachinesLock for ha-982000-m02: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:47:46.878552    3210 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "ha-982000-m02"
	I0802 10:47:46.878562    3210 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:47:46.878572    3210 fix.go:54] fixHost starting: m02
	I0802 10:47:46.878749    3210 fix.go:112] recreateIfNeeded on ha-982000-m02: state=Stopped err=<nil>
	W0802 10:47:46.878757    3210 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:47:46.881884    3210 out.go:177] * Restarting existing qemu2 VM for "ha-982000-m02" ...
	I0802 10:47:46.885771    3210 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:47:46.885852    3210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:13:bd:f9:01:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/disk.qcow2
	I0802 10:47:46.889165    3210 main.go:141] libmachine: STDOUT: 
	I0802 10:47:46.889201    3210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:47:46.889240    3210 fix.go:56] duration metric: took 10.672084ms for fixHost
	I0802 10:47:46.889246    3210 start.go:83] releasing machines lock for "ha-982000-m02", held for 10.688459ms
	W0802 10:47:46.889258    3210 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:47:46.889310    3210 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:47:46.889314    3210 start.go:729] Will try again in 5 seconds ...
	I0802 10:47:51.890105    3210 start.go:360] acquireMachinesLock for ha-982000-m02: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:47:51.890595    3210 start.go:364] duration metric: took 411.875µs to acquireMachinesLock for "ha-982000-m02"
	I0802 10:47:51.890744    3210 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:47:51.890765    3210 fix.go:54] fixHost starting: m02
	I0802 10:47:51.891560    3210 fix.go:112] recreateIfNeeded on ha-982000-m02: state=Stopped err=<nil>
	W0802 10:47:51.891587    3210 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:47:51.895580    3210 out.go:177] * Restarting existing qemu2 VM for "ha-982000-m02" ...
	I0802 10:47:51.899627    3210 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:47:51.899851    3210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:13:bd:f9:01:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/disk.qcow2
	I0802 10:47:51.909148    3210 main.go:141] libmachine: STDOUT: 
	I0802 10:47:51.909206    3210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:47:51.909274    3210 fix.go:56] duration metric: took 18.513542ms for fixHost
	I0802 10:47:51.909293    3210 start.go:83] releasing machines lock for "ha-982000-m02", held for 18.675ms
	W0802 10:47:51.909475    3210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:47:51.912583    3210 out.go:177] 
	W0802 10:47:51.916710    3210 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:47:51.916737    3210 out.go:239] * 
	* 
	W0802 10:47:51.922387    3210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 10:47:51.926660    3210 out.go:177] 

** /stderr **
ha_test.go:422: I0802 10:47:46.869080    3210 out.go:291] Setting OutFile to fd 1 ...
I0802 10:47:46.869398    3210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:47:46.869403    3210 out.go:304] Setting ErrFile to fd 2...
I0802 10:47:46.869406    3210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:47:46.869573    3210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:47:46.869868    3210 mustload.go:65] Loading cluster: ha-982000
I0802 10:47:46.870189    3210 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0802 10:47:46.870531    3210 host.go:58] "ha-982000-m02" host status: Stopped
I0802 10:47:46.875009    3210 out.go:177] * Starting "ha-982000-m02" control-plane node in "ha-982000" cluster
I0802 10:47:46.877858    3210 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0802 10:47:46.877877    3210 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0802 10:47:46.877889    3210 cache.go:56] Caching tarball of preloaded images
I0802 10:47:46.877971    3210 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0802 10:47:46.877979    3210 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0802 10:47:46.878049    3210 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/ha-982000/config.json ...
I0802 10:47:46.878495    3210 start.go:360] acquireMachinesLock for ha-982000-m02: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0802 10:47:46.878552    3210 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "ha-982000-m02"
I0802 10:47:46.878562    3210 start.go:96] Skipping create...Using existing machine configuration
I0802 10:47:46.878572    3210 fix.go:54] fixHost starting: m02
I0802 10:47:46.878749    3210 fix.go:112] recreateIfNeeded on ha-982000-m02: state=Stopped err=<nil>
W0802 10:47:46.878757    3210 fix.go:138] unexpected machine state, will restart: <nil>
I0802 10:47:46.881884    3210 out.go:177] * Restarting existing qemu2 VM for "ha-982000-m02" ...
I0802 10:47:46.885771    3210 qemu.go:418] Using hvf for hardware acceleration
I0802 10:47:46.885852    3210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:13:bd:f9:01:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/disk.qcow2
I0802 10:47:46.889165    3210 main.go:141] libmachine: STDOUT: 
I0802 10:47:46.889201    3210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0802 10:47:46.889240    3210 fix.go:56] duration metric: took 10.672084ms for fixHost
I0802 10:47:46.889246    3210 start.go:83] releasing machines lock for "ha-982000-m02", held for 10.688459ms
W0802 10:47:46.889258    3210 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0802 10:47:46.889310    3210 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0802 10:47:46.889314    3210 start.go:729] Will try again in 5 seconds ...
I0802 10:47:51.890105    3210 start.go:360] acquireMachinesLock for ha-982000-m02: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0802 10:47:51.890595    3210 start.go:364] duration metric: took 411.875µs to acquireMachinesLock for "ha-982000-m02"
I0802 10:47:51.890744    3210 start.go:96] Skipping create...Using existing machine configuration
I0802 10:47:51.890765    3210 fix.go:54] fixHost starting: m02
I0802 10:47:51.891560    3210 fix.go:112] recreateIfNeeded on ha-982000-m02: state=Stopped err=<nil>
W0802 10:47:51.891587    3210 fix.go:138] unexpected machine state, will restart: <nil>
I0802 10:47:51.895580    3210 out.go:177] * Restarting existing qemu2 VM for "ha-982000-m02" ...
I0802 10:47:51.899627    3210 qemu.go:418] Using hvf for hardware acceleration
I0802 10:47:51.899851    3210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:13:bd:f9:01:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m02/disk.qcow2
I0802 10:47:51.909148    3210 main.go:141] libmachine: STDOUT: 
I0802 10:47:51.909206    3210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0802 10:47:51.909274    3210 fix.go:56] duration metric: took 18.513542ms for fixHost
I0802 10:47:51.909293    3210 start.go:83] releasing machines lock for "ha-982000-m02", held for 18.675ms
W0802 10:47:51.909475    3210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0802 10:47:51.912583    3210 out.go:177] 
W0802 10:47:51.916710    3210 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0802 10:47:51.916737    3210 out.go:239] * 
* 
W0802 10:47:51.922387    3210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0802 10:47:51.926660    3210 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-982000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr: exit status 7 (2m32.65908225s)

-- stdout --
	ha-982000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-982000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-982000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0802 10:47:51.984420    3214 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:47:51.984578    3214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:47:51.984582    3214 out.go:304] Setting ErrFile to fd 2...
	I0802 10:47:51.984585    3214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:47:51.984757    3214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:47:51.984911    3214 out.go:298] Setting JSON to false
	I0802 10:47:51.984923    3214 mustload.go:65] Loading cluster: ha-982000
	I0802 10:47:51.984954    3214 notify.go:220] Checking for updates...
	I0802 10:47:51.985206    3214 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:47:51.985214    3214 status.go:255] checking status of ha-982000 ...
	I0802 10:47:51.986051    3214 status.go:330] ha-982000 host status = "Running" (err=<nil>)
	I0802 10:47:51.986059    3214 host.go:66] Checking if "ha-982000" exists ...
	I0802 10:47:51.986188    3214 host.go:66] Checking if "ha-982000" exists ...
	I0802 10:47:51.986321    3214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:47:51.986330    3214 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/id_rsa Username:docker}
	W0802 10:47:51.986521    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:51.986540    3214 retry.go:31] will retry after 146.129208ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:52.134903    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:52.134931    3214 retry.go:31] will retry after 518.774686ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:52.656195    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:52.656256    3214 retry.go:31] will retry after 530.729732ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:53.189603    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:53.189808    3214 retry.go:31] will retry after 130.766903ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:53.322651    3214 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/id_rsa Username:docker}
	W0802 10:47:53.323835    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:53.323879    3214 retry.go:31] will retry after 246.118924ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:53.572500    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:53.572528    3214 retry.go:31] will retry after 448.523033ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:54.023253    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:54.023277    3214 retry.go:31] will retry after 554.387161ms: dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:54.579798    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0802 10:47:54.579842    3214 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0802 10:47:54.579867    3214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:54.579871    3214 status.go:257] ha-982000 status: &{Name:ha-982000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:47:54.579879    3214 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0802 10:47:54.579884    3214 status.go:255] checking status of ha-982000-m02 ...
	I0802 10:47:54.580066    3214 status.go:330] ha-982000-m02 host status = "Stopped" (err=<nil>)
	I0802 10:47:54.580070    3214 status.go:343] host is not running, skipping remaining checks
	I0802 10:47:54.580073    3214 status.go:257] ha-982000-m02 status: &{Name:ha-982000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:47:54.580077    3214 status.go:255] checking status of ha-982000-m03 ...
	I0802 10:47:54.580635    3214 status.go:330] ha-982000-m03 host status = "Running" (err=<nil>)
	I0802 10:47:54.580642    3214 host.go:66] Checking if "ha-982000-m03" exists ...
	I0802 10:47:54.580748    3214 host.go:66] Checking if "ha-982000-m03" exists ...
	I0802 10:47:54.580873    3214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:47:54.580878    3214 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m03/id_rsa Username:docker}
	W0802 10:49:09.580605    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0802 10:49:09.580801    3214 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0802 10:49:09.580840    3214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0802 10:49:09.580859    3214 status.go:257] ha-982000-m03 status: &{Name:ha-982000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:49:09.580906    3214 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0802 10:49:09.580980    3214 status.go:255] checking status of ha-982000-m04 ...
	I0802 10:49:09.584104    3214 status.go:330] ha-982000-m04 host status = "Running" (err=<nil>)
	I0802 10:49:09.584132    3214 host.go:66] Checking if "ha-982000-m04" exists ...
	I0802 10:49:09.584608    3214 host.go:66] Checking if "ha-982000-m04" exists ...
	I0802 10:49:09.585168    3214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 10:49:09.585197    3214 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000-m04/id_rsa Username:docker}
	W0802 10:50:24.585630    3214 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0802 10:50:24.585677    3214 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0802 10:50:24.585685    3214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0802 10:50:24.585690    3214 status.go:257] ha-982000-m04 status: &{Name:ha-982000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0802 10:50:24.585699    3214 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
E0802 10:50:32.829904    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 3 (25.960183458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0802 10:50:50.545213    3242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0802 10:50:50.545228    3242 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (183.75s)
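
Every qemu2 restart in this test fails at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the driver never gets a network fd to hand to QEMU. A minimal probe of that unix socket, sketched below, separates "daemon not serving" (connection refused, as in the log) from "socket file missing" (no such file or directory). The program is illustrative only, not part of minikube or its test suite; the socket path is the one quoted in the log above.

// probesock.go: hypothetical diagnostic, not part of minikube.
// Dials the unix socket that socket_vmnet_client connects to; a
// "connection refused" error reproduces the driver-start failure
// recorded in this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path quoted in the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}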

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-982000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-982000 -v=7 --alsologtostderr
E0802 10:52:15.042722    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:55:32.819623    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-982000 -v=7 --alsologtostderr: (3m49.013486375s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-982000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-982000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.227287708s)

-- stdout --
	* [ha-982000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-982000" primary control-plane node in "ha-982000" cluster
	* Restarting existing qemu2 VM for "ha-982000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-982000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 10:55:58.198662    3308 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:55:58.198844    3308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:55:58.198849    3308 out.go:304] Setting ErrFile to fd 2...
	I0802 10:55:58.198852    3308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:55:58.199031    3308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:55:58.200217    3308 out.go:298] Setting JSON to false
	I0802 10:55:58.220839    3308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3322,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:55:58.220923    3308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:55:58.226207    3308 out.go:177] * [ha-982000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:55:58.234138    3308 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 10:55:58.234170    3308 notify.go:220] Checking for updates...
	I0802 10:55:58.238136    3308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:55:58.241180    3308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:55:58.245040    3308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:55:58.248137    3308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 10:55:58.251121    3308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 10:55:58.254448    3308 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:55:58.254505    3308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:55:58.259098    3308 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 10:55:58.266165    3308 start.go:297] selected driver: qemu2
	I0802 10:55:58.266173    3308 start.go:901] validating driver "qemu2" against &{Name:ha-982000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-982000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:55:58.266255    3308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 10:55:58.269042    3308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 10:55:58.269063    3308 cni.go:84] Creating CNI manager for ""
	I0802 10:55:58.269068    3308 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 10:55:58.269114    3308 start.go:340] cluster config:
	{Name:ha-982000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-982000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:55:58.273339    3308 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:55:58.282128    3308 out.go:177] * Starting "ha-982000" primary control-plane node in "ha-982000" cluster
	I0802 10:55:58.286126    3308 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 10:55:58.286143    3308 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 10:55:58.286157    3308 cache.go:56] Caching tarball of preloaded images
	I0802 10:55:58.286244    3308 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 10:55:58.286252    3308 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 10:55:58.286323    3308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/ha-982000/config.json ...
	I0802 10:55:58.286778    3308 start.go:360] acquireMachinesLock for ha-982000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:55:58.286814    3308 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "ha-982000"
	I0802 10:55:58.286822    3308 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:55:58.286830    3308 fix.go:54] fixHost starting: 
	I0802 10:55:58.286947    3308 fix.go:112] recreateIfNeeded on ha-982000: state=Stopped err=<nil>
	W0802 10:55:58.286955    3308 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:55:58.291162    3308 out.go:177] * Restarting existing qemu2 VM for "ha-982000" ...
	I0802 10:55:58.299121    3308 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:55:58.299160    3308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b9:a8:83:8d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/disk.qcow2
	I0802 10:55:58.301408    3308 main.go:141] libmachine: STDOUT: 
	I0802 10:55:58.301430    3308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:55:58.301459    3308 fix.go:56] duration metric: took 14.6305ms for fixHost
	I0802 10:55:58.301464    3308 start.go:83] releasing machines lock for "ha-982000", held for 14.64675ms
	W0802 10:55:58.301471    3308 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:55:58.301512    3308 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:55:58.301517    3308 start.go:729] Will try again in 5 seconds ...
	I0802 10:56:03.303579    3308 start.go:360] acquireMachinesLock for ha-982000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:56:03.303945    3308 start.go:364] duration metric: took 289.375µs to acquireMachinesLock for "ha-982000"
	I0802 10:56:03.304061    3308 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:56:03.304078    3308 fix.go:54] fixHost starting: 
	I0802 10:56:03.304781    3308 fix.go:112] recreateIfNeeded on ha-982000: state=Stopped err=<nil>
	W0802 10:56:03.304812    3308 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:56:03.312139    3308 out.go:177] * Restarting existing qemu2 VM for "ha-982000" ...
	I0802 10:56:03.315149    3308 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:56:03.315368    3308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b9:a8:83:8d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/disk.qcow2
	I0802 10:56:03.323940    3308 main.go:141] libmachine: STDOUT: 
	I0802 10:56:03.323998    3308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:56:03.324058    3308 fix.go:56] duration metric: took 19.981167ms for fixHost
	I0802 10:56:03.324074    3308 start.go:83] releasing machines lock for "ha-982000", held for 20.108125ms
	W0802 10:56:03.324217    3308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:56:03.332447    3308 out.go:177] 
	W0802 10:56:03.337314    3308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:56:03.337350    3308 out.go:239] * 
	* 
	W0802 10:56:03.339740    3308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 10:56:03.352130    3308 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-982000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-982000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (33.317709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)
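
The post-mortem helpers query host state with a Go template (status --format={{.Host}}), which is why the stdout above is the bare word "Stopped". The sketch below renders that same template over a struct whose fields mirror the &{Name:... Host:... Kubelet:...} values printed by status.go in this log; the struct definition is a reconstruction for illustration, not minikube's actual type.

// statusfmt.go: illustrative rendering of --format={{.Host}}.
package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the status.go log lines above;
// the real minikube type may differ.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Same template string the helpers pass on the command line.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	st := Status{Name: "ha-982000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
	// Output: Stopped
}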

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 node delete m03 -v=7 --alsologtostderr: exit status 83 (37.372458ms)

-- stdout --
	* The control-plane node ha-982000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-982000"

-- /stdout --
** stderr ** 
	I0802 10:56:03.487605    3324 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:56:03.487804    3324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:56:03.487807    3324 out.go:304] Setting ErrFile to fd 2...
	I0802 10:56:03.487809    3324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:56:03.487937    3324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:56:03.488144    3324 mustload.go:65] Loading cluster: ha-982000
	I0802 10:56:03.488354    3324 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0802 10:56:03.488653    3324 out.go:239] ! The control-plane node ha-982000 host is not running (will try others): state=Stopped
	! The control-plane node ha-982000 host is not running (will try others): state=Stopped
	W0802 10:56:03.488751    3324 out.go:239] ! The control-plane node ha-982000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-982000-m02 host is not running (will try others): state=Stopped
	I0802 10:56:03.492233    3324 out.go:177] * The control-plane node ha-982000-m03 host is not running: state=Stopped
	I0802 10:56:03.495060    3324 out.go:177]   To start a cluster, run: "minikube start -p ha-982000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-982000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr: exit status 7 (30.088625ms)

-- stdout --
	ha-982000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0802 10:56:03.526907    3326 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:56:03.527062    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:56:03.527065    3326 out.go:304] Setting ErrFile to fd 2...
	I0802 10:56:03.527068    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:56:03.527206    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:56:03.527326    3326 out.go:298] Setting JSON to false
	I0802 10:56:03.527335    3326 mustload.go:65] Loading cluster: ha-982000
	I0802 10:56:03.527404    3326 notify.go:220] Checking for updates...
	I0802 10:56:03.527557    3326 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:56:03.527565    3326 status.go:255] checking status of ha-982000 ...
	I0802 10:56:03.527783    3326 status.go:330] ha-982000 host status = "Stopped" (err=<nil>)
	I0802 10:56:03.527786    3326 status.go:343] host is not running, skipping remaining checks
	I0802 10:56:03.527789    3326 status.go:257] ha-982000 status: &{Name:ha-982000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:56:03.527798    3326 status.go:255] checking status of ha-982000-m02 ...
	I0802 10:56:03.527889    3326 status.go:330] ha-982000-m02 host status = "Stopped" (err=<nil>)
	I0802 10:56:03.527891    3326 status.go:343] host is not running, skipping remaining checks
	I0802 10:56:03.527893    3326 status.go:257] ha-982000-m02 status: &{Name:ha-982000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:56:03.527897    3326 status.go:255] checking status of ha-982000-m03 ...
	I0802 10:56:03.527986    3326 status.go:330] ha-982000-m03 host status = "Stopped" (err=<nil>)
	I0802 10:56:03.527989    3326 status.go:343] host is not running, skipping remaining checks
	I0802 10:56:03.527991    3326 status.go:257] ha-982000-m03 status: &{Name:ha-982000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:56:03.527994    3326 status.go:255] checking status of ha-982000-m04 ...
	I0802 10:56:03.528088    3326 status.go:330] ha-982000-m04 host status = "Stopped" (err=<nil>)
	I0802 10:56:03.528091    3326 status.go:343] host is not running, skipping remaining checks
	I0802 10:56:03.528097    3326 status.go:257] ha-982000-m04 status: &{Name:ha-982000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (29.028833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
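
The exit status 83 above comes from minikube's cluster-loading step: it walks the control-plane nodes in order (ha-982000, then -m02, then -m03), warns "will try others" for each stopped host, and gives up when none is running. The loop below is a reconstruction of that selection behavior from the log messages alone, not minikube's actual mustload code.

// cpselect.go: reconstruction for illustration; the real logic lives
// in minikube's mustload package.
package main

import "fmt"

type node struct {
	name  string
	state string // as reported by `minikube status`
}

func main() {
	// Order matches the warnings in the log above.
	nodes := []node{
		{"ha-982000", "Stopped"},
		{"ha-982000-m02", "Stopped"},
		{"ha-982000-m03", "Stopped"},
	}
	for _, n := range nodes {
		if n.state == "Running" {
			fmt.Printf("using control-plane node %s\n", n.name)
			return
		}
		fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n",
			n.name, n.state)
	}
	fmt.Println(`To start a cluster, run: "minikube start -p ha-982000"`)
}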

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.05s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-982000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-982000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-982000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-982000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (57.227917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.05s)
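
ha_test.go:413 fails on the Status field of the `profile list --output json` payload quoted above: the test wants "Degraded" for a cluster that has lost a control-plane node, but the profile reports "Stopped". The decoder below models only the two fields the assertion reads (the large Config object is dropped); it is a trimmed illustration, not the test's actual types.

// profilestatus.go: minimal decoder for the payload quoted above.
package main

import (
	"encoding/json"
	"fmt"
)

// profileList keeps just the fields the assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-982000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" here; this run reported "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}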

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 stop -v=7 --alsologtostderr
E0802 10:57:15.030267    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:58:38.095003    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-982000 stop -v=7 --alsologtostderr: (3m21.979618542s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr: exit status 7 (64.63225ms)

-- stdout --
	ha-982000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-982000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0802 10:59:26.637814    3383 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:59:26.638023    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:26.638027    3383 out.go:304] Setting ErrFile to fd 2...
	I0802 10:59:26.638030    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:26.638212    3383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:59:26.638374    3383 out.go:298] Setting JSON to false
	I0802 10:59:26.638386    3383 mustload.go:65] Loading cluster: ha-982000
	I0802 10:59:26.638424    3383 notify.go:220] Checking for updates...
	I0802 10:59:26.639267    3383 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:59:26.639295    3383 status.go:255] checking status of ha-982000 ...
	I0802 10:59:26.639788    3383 status.go:330] ha-982000 host status = "Stopped" (err=<nil>)
	I0802 10:59:26.639794    3383 status.go:343] host is not running, skipping remaining checks
	I0802 10:59:26.639797    3383 status.go:257] ha-982000 status: &{Name:ha-982000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:59:26.639811    3383 status.go:255] checking status of ha-982000-m02 ...
	I0802 10:59:26.639935    3383 status.go:330] ha-982000-m02 host status = "Stopped" (err=<nil>)
	I0802 10:59:26.639938    3383 status.go:343] host is not running, skipping remaining checks
	I0802 10:59:26.639941    3383 status.go:257] ha-982000-m02 status: &{Name:ha-982000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:59:26.639946    3383 status.go:255] checking status of ha-982000-m03 ...
	I0802 10:59:26.640070    3383 status.go:330] ha-982000-m03 host status = "Stopped" (err=<nil>)
	I0802 10:59:26.640074    3383 status.go:343] host is not running, skipping remaining checks
	I0802 10:59:26.640076    3383 status.go:257] ha-982000-m03 status: &{Name:ha-982000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 10:59:26.640080    3383 status.go:255] checking status of ha-982000-m04 ...
	I0802 10:59:26.640200    3383 status.go:330] ha-982000-m04 host status = "Stopped" (err=<nil>)
	I0802 10:59:26.640204    3383 status.go:343] host is not running, skipping remaining checks
	I0802 10:59:26.640207    3383 status.go:257] ha-982000-m04 status: &{Name:ha-982000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr": ha-982000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-982000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (31.619708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
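
Note: the stop itself completed (3m21.98s); what fails are the follow-up assertions at ha_test.go:543/549/552, which expect two stopped control planes and three stopped kubelets, while the status above shows three control planes plus a worker. The likely cause is the DeleteSecondaryNode failure earlier in this run: ha-982000-m03 was never removed, so every count is off by one. A marker-count check of this kind looks roughly like the following (a sketch under that assumption, not the test's literal code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// abbreviated from the status output above
		status := "type: Control Plane\ntype: Control Plane\ntype: Control Plane\ntype: Worker\n"
		fmt.Println(strings.Count(status, "type: Control Plane")) // 3; the serial flow expects 2
		fmt.Println(strings.Count(status, "type: Worker"))        // 1
	}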

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-982000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-982000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181133125s)

-- stdout --
	* [ha-982000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-982000" primary control-plane node in "ha-982000" cluster
	* Restarting existing qemu2 VM for "ha-982000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-982000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 10:59:26.700047    3387 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:59:26.700170    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:26.700172    3387 out.go:304] Setting ErrFile to fd 2...
	I0802 10:59:26.700175    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:26.700299    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:59:26.701325    3387 out.go:298] Setting JSON to false
	I0802 10:59:26.717454    3387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3530,"bootTime":1722618036,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:59:26.717548    3387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:59:26.722351    3387 out.go:177] * [ha-982000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:59:26.729186    3387 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 10:59:26.729239    3387 notify.go:220] Checking for updates...
	I0802 10:59:26.737138    3387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:59:26.740178    3387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:59:26.743183    3387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:59:26.746188    3387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 10:59:26.749145    3387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 10:59:26.752472    3387 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:59:26.752752    3387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:59:26.757167    3387 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 10:59:26.764189    3387 start.go:297] selected driver: qemu2
	I0802 10:59:26.764194    3387 start.go:901] validating driver "qemu2" against &{Name:ha-982000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-982000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:59:26.764277    3387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 10:59:26.766490    3387 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 10:59:26.766509    3387 cni.go:84] Creating CNI manager for ""
	I0802 10:59:26.766514    3387 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 10:59:26.766564    3387 start.go:340] cluster config:
	{Name:ha-982000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-982000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:59:26.770198    3387 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:59:26.779205    3387 out.go:177] * Starting "ha-982000" primary control-plane node in "ha-982000" cluster
	I0802 10:59:26.783110    3387 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 10:59:26.783124    3387 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 10:59:26.783135    3387 cache.go:56] Caching tarball of preloaded images
	I0802 10:59:26.783192    3387 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 10:59:26.783198    3387 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 10:59:26.783271    3387 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/ha-982000/config.json ...
	I0802 10:59:26.783703    3387 start.go:360] acquireMachinesLock for ha-982000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:59:26.783740    3387 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "ha-982000"
	I0802 10:59:26.783748    3387 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:59:26.783756    3387 fix.go:54] fixHost starting: 
	I0802 10:59:26.783877    3387 fix.go:112] recreateIfNeeded on ha-982000: state=Stopped err=<nil>
	W0802 10:59:26.783885    3387 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:59:26.787186    3387 out.go:177] * Restarting existing qemu2 VM for "ha-982000" ...
	I0802 10:59:26.795164    3387 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:59:26.795207    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b9:a8:83:8d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/disk.qcow2
	I0802 10:59:26.797255    3387 main.go:141] libmachine: STDOUT: 
	I0802 10:59:26.797276    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:59:26.797303    3387 fix.go:56] duration metric: took 13.548875ms for fixHost
	I0802 10:59:26.797310    3387 start.go:83] releasing machines lock for "ha-982000", held for 13.565208ms
	W0802 10:59:26.797316    3387 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:59:26.797352    3387 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:59:26.797357    3387 start.go:729] Will try again in 5 seconds ...
	I0802 10:59:31.799456    3387 start.go:360] acquireMachinesLock for ha-982000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 10:59:31.800092    3387 start.go:364] duration metric: took 506.292µs to acquireMachinesLock for "ha-982000"
	I0802 10:59:31.800213    3387 start.go:96] Skipping create...Using existing machine configuration
	I0802 10:59:31.800234    3387 fix.go:54] fixHost starting: 
	I0802 10:59:31.801011    3387 fix.go:112] recreateIfNeeded on ha-982000: state=Stopped err=<nil>
	W0802 10:59:31.801039    3387 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 10:59:31.805938    3387 out.go:177] * Restarting existing qemu2 VM for "ha-982000" ...
	I0802 10:59:31.813664    3387 qemu.go:418] Using hvf for hardware acceleration
	I0802 10:59:31.813905    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b9:a8:83:8d:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/ha-982000/disk.qcow2
	I0802 10:59:31.823245    3387 main.go:141] libmachine: STDOUT: 
	I0802 10:59:31.823309    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 10:59:31.823385    3387 fix.go:56] duration metric: took 23.154375ms for fixHost
	I0802 10:59:31.823407    3387 start.go:83] releasing machines lock for "ha-982000", held for 23.291416ms
	W0802 10:59:31.823557    3387 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-982000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 10:59:31.830612    3387 out.go:177] 
	W0802 10:59:31.833742    3387 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 10:59:31.833782    3387 out.go:239] * 
	* 
	W0802 10:59:31.836360    3387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 10:59:31.846605    3387 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-982000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (67.967833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
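
Note: this is the same root cause as every other qemu2 failure in this report: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets a network interface and minikube gives up with GUEST_PROVISION. That points at the socket_vmnet daemon being down on the CI host rather than at this particular test. The failing connect can be reproduced directly (socket path taken from the log; a diagnostic sketch, not part of the suite):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// same unix socket the qemu2 driver reaches via socket_vmnet_client
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println(err) // "connection refused" while the daemon is down
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Restarting the socket_vmnet service on the agent (however launchd manages it there) should clear this whole class of failures.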

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-982000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-982000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-982000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-982000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (28.926417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-982000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-982000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.592916ms)

-- stdout --
	* The control-plane node ha-982000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-982000"

-- /stdout --
** stderr ** 
	I0802 10:59:32.032849    3404 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:59:32.032995    3404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:32.032998    3404 out.go:304] Setting ErrFile to fd 2...
	I0802 10:59:32.033000    3404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:59:32.033115    3404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:59:32.033343    3404 mustload.go:65] Loading cluster: ha-982000
	I0802 10:59:32.033547    3404 config.go:182] Loaded profile config "ha-982000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0802 10:59:32.033856    3404 out.go:239] ! The control-plane node ha-982000 host is not running (will try others): state=Stopped
	! The control-plane node ha-982000 host is not running (will try others): state=Stopped
	W0802 10:59:32.033953    3404 out.go:239] ! The control-plane node ha-982000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-982000-m02 host is not running (will try others): state=Stopped
	I0802 10:59:32.038072    3404 out.go:177] * The control-plane node ha-982000-m03 host is not running: state=Stopped
	I0802 10:59:32.041996    3404 out.go:177]   To start a cluster, run: "minikube start -p ha-982000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-982000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-982000 -n ha-982000: exit status 7 (30.015709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-982000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.92s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-962000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-962000 --driver=qemu2 : exit status 80 (9.854883375s)

-- stdout --
	* [image-962000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-962000" primary control-plane node in "image-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-962000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-962000 -n image-962000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-962000 -n image-962000: exit status 7 (68.385666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-962000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)

TestJSONOutput/start/Command (9.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.885056542s)

-- stdout --
	{"specversion":"1.0","id":"c0b73b09-d0ab-461b-8fdd-8396eb9bfdae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-566000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a57ac2d5-d29e-4970-86a7-a4db220f753c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"fba716d8-9f28-41e4-8e20-d897dae0b8de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig"}}
	{"specversion":"1.0","id":"148569e0-c23e-4cf1-b918-253e4335191c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3fed20ba-2a35-48da-be5a-6bf60776448c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41c6932f-822a-4ee3-b796-23ee7752093d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube"}}
	{"specversion":"1.0","id":"184eabfe-5837-4f9a-96bb-3288041ff3e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03a0d006-f41a-4862-bc15-aaba42ad0742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3ba8844-0d79-45b5-90e0-eaa2fe44a247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e14c27de-4929-461f-8aa4-c939749c821d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-566000\" primary control-plane node in \"json-output-566000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"51bda683-9d69-43de-bac1-a0e8971f0100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a03c004b-ee9c-4d8d-9439-294bedc08ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-566000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b424bbaf-055e-46a1-a828-12a4a67e6e9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1600850e-32c1-46ca-bd72-675fda8cfb14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2baa59f4-d8e7-40c3-9370-aaf90cec5637","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-566000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"19f6ed63-f4d3-440c-9b26-f9bd7dcefd01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"887db39f-eaca-4894-9fa9-d1594c4e42f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.89s)
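
Note: TestJSONOutput requires every stdout line to be a JSON cloud event, but the raw "OUTPUT:" / "ERROR:" lines from socket_vmnet_client leak into the stream unwrapped, and decoding stops at the first non-JSON byte; that is exactly the "invalid character 'O'" error above. The failure mode in miniature (illustrative, not the test's code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var ev map[string]any
		err := json.Unmarshal([]byte("OUTPUT: "), &ev)
		fmt.Println(err) // invalid character 'O' looking for beginning of value
	}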

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-566000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-566000 --output=json --user=testUser: exit status 83 (78.295792ms)

-- stdout --
	{"specversion":"1.0","id":"84e99dac-564d-4b52-9555-2ecab24723e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-566000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8b71b078-5e63-42a2-964c-dba503d74658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-566000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-566000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-566000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-566000 --output=json --user=testUser: exit status 83 (42.406875ms)

-- stdout --
	* The control-plane node json-output-566000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-566000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-566000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-566000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-029000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-029000 --driver=qemu2 : exit status 80 (9.882339541s)

-- stdout --
	* [first-029000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-029000" primary control-plane node in "first-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-029000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-02 11:00:05.778582 -0700 PDT m=+2077.289736501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-031000 -n second-031000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-031000 -n second-031000: exit status 85 (79.671292ms)

-- stdout --
	* Profile "second-031000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-031000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-031000" host is not running, skipping log retrieval (state="* Profile \"second-031000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-031000\"")
helpers_test.go:175: Cleaning up "second-031000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-031000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-02 11:00:05.960434 -0700 PDT m=+2077.471595792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-029000 -n first-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-029000 -n first-029000: exit status 7 (30.191709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-029000
--- FAIL: TestMinikubeProfile (10.17s)
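
Note: the exit status 85 above accompanies minikube's "Profile ... not found" message: the post-mortem helper probes both profiles, and second-031000 was never created because first-029000 failed to start (the same socket_vmnet connection refusal), so only the stopped first profile exists to clean up.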

TestMountStart/serial/StartWithMountFirst (10.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-441000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-441000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.989681667s)

-- stdout --
	* [mount-start-1-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-441000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-441000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-441000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-441000 -n mount-start-1-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-441000 -n mount-start-1-441000: exit status 7 (68.77825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)
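
Note: every qemu2-driver start in this report fails at the same point: the socket_vmnet client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so VM creation is doomed before Kubernetes is involved. The sketch below is a hypothetical pre-flight probe, not part of the minikube test suite; the socket path comes from the SocketVMnetPath value in the logged cluster config. Running something like it before the suite would fail fast, or prompt a daemon restart, instead of spending roughly 10s per test on two doomed create attempts.

// probe_socket_vmnet.go - hypothetical pre-flight check (editor's sketch,
// not part of the suite): dials the socket_vmnet UNIX socket directly to
// reproduce the "Connection refused" symptom seen in the logs above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing logs

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A missing or dead daemon surfaces here as "connect: connection refused"
		// or "no such file or directory", matching the OUTPUT lines in the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If a probe like this fails on the CI host, restarting the socket_vmnet daemon is the likely fix before re-running any of the qemu2 tests below.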

TestMultiNode/serial/FreshStart2Nodes (10.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-325000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-325000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.063676041s)

-- stdout --
	* [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-325000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:00:16.329308    3860 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:00:16.329433    3860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:00:16.329436    3860 out.go:304] Setting ErrFile to fd 2...
	I0802 11:00:16.329439    3860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:00:16.329567    3860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:00:16.330593    3860 out.go:298] Setting JSON to false
	I0802 11:00:16.346587    3860 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3580,"bootTime":1722618036,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:00:16.346662    3860 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:00:16.353437    3860 out.go:177] * [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:00:16.360404    3860 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:00:16.360443    3860 notify.go:220] Checking for updates...
	I0802 11:00:16.367363    3860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:00:16.370344    3860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:00:16.373392    3860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:00:16.376314    3860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:00:16.379373    3860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:00:16.382578    3860 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:00:16.386336    3860 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:00:16.393378    3860 start.go:297] selected driver: qemu2
	I0802 11:00:16.393386    3860 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:00:16.393394    3860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:00:16.395712    3860 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:00:16.397357    3860 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:00:16.400423    3860 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:00:16.400459    3860 cni.go:84] Creating CNI manager for ""
	I0802 11:00:16.400465    3860 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0802 11:00:16.400470    3860 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0802 11:00:16.400503    3860 start.go:340] cluster config:
	{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:00:16.404215    3860 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:00:16.412400    3860 out.go:177] * Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	I0802 11:00:16.416322    3860 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:00:16.416340    3860 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:00:16.416350    3860 cache.go:56] Caching tarball of preloaded images
	I0802 11:00:16.416405    3860 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:00:16.416411    3860 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:00:16.416621    3860 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/multinode-325000/config.json ...
	I0802 11:00:16.416633    3860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/multinode-325000/config.json: {Name:mk80ba967a414ada098cae21dd4d74380877f62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:00:16.416859    3860 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:00:16.416896    3860 start.go:364] duration metric: took 31.292µs to acquireMachinesLock for "multinode-325000"
	I0802 11:00:16.416908    3860 start.go:93] Provisioning new machine with config: &{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:00:16.416938    3860 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:00:16.421316    3860 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:00:16.438405    3860 start.go:159] libmachine.API.Create for "multinode-325000" (driver="qemu2")
	I0802 11:00:16.438435    3860 client.go:168] LocalClient.Create starting
	I0802 11:00:16.438493    3860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:00:16.438522    3860 main.go:141] libmachine: Decoding PEM data...
	I0802 11:00:16.438531    3860 main.go:141] libmachine: Parsing certificate...
	I0802 11:00:16.438570    3860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:00:16.438592    3860 main.go:141] libmachine: Decoding PEM data...
	I0802 11:00:16.438603    3860 main.go:141] libmachine: Parsing certificate...
	I0802 11:00:16.438922    3860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:00:16.589160    3860 main.go:141] libmachine: Creating SSH key...
	I0802 11:00:16.759390    3860 main.go:141] libmachine: Creating Disk image...
	I0802 11:00:16.759397    3860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:00:16.759610    3860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:16.768943    3860 main.go:141] libmachine: STDOUT: 
	I0802 11:00:16.768963    3860 main.go:141] libmachine: STDERR: 
	I0802 11:00:16.769018    3860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2 +20000M
	I0802 11:00:16.776745    3860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:00:16.776759    3860 main.go:141] libmachine: STDERR: 
	I0802 11:00:16.776771    3860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:16.776774    3860 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:00:16.776787    3860 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:00:16.776815    3860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:49:bb:61:19:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:16.778412    3860 main.go:141] libmachine: STDOUT: 
	I0802 11:00:16.778429    3860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:00:16.778448    3860 client.go:171] duration metric: took 340.021791ms to LocalClient.Create
	I0802 11:00:18.780548    3860 start.go:128] duration metric: took 2.363676542s to createHost
	I0802 11:00:18.780675    3860 start.go:83] releasing machines lock for "multinode-325000", held for 2.363809875s
	W0802 11:00:18.780742    3860 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:00:18.794043    3860 out.go:177] * Deleting "multinode-325000" in qemu2 ...
	W0802 11:00:18.819633    3860 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:00:18.819656    3860 start.go:729] Will try again in 5 seconds ...
	I0802 11:00:23.821654    3860 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:00:23.822162    3860 start.go:364] duration metric: took 378.834µs to acquireMachinesLock for "multinode-325000"
	I0802 11:00:23.822293    3860 start.go:93] Provisioning new machine with config: &{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:00:23.822614    3860 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:00:23.828153    3860 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:00:23.879197    3860 start.go:159] libmachine.API.Create for "multinode-325000" (driver="qemu2")
	I0802 11:00:23.879254    3860 client.go:168] LocalClient.Create starting
	I0802 11:00:23.879373    3860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:00:23.879431    3860 main.go:141] libmachine: Decoding PEM data...
	I0802 11:00:23.879446    3860 main.go:141] libmachine: Parsing certificate...
	I0802 11:00:23.879499    3860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:00:23.879546    3860 main.go:141] libmachine: Decoding PEM data...
	I0802 11:00:23.879560    3860 main.go:141] libmachine: Parsing certificate...
	I0802 11:00:23.880363    3860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:00:24.045775    3860 main.go:141] libmachine: Creating SSH key...
	I0802 11:00:24.295433    3860 main.go:141] libmachine: Creating Disk image...
	I0802 11:00:24.295442    3860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:00:24.295628    3860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:24.304811    3860 main.go:141] libmachine: STDOUT: 
	I0802 11:00:24.304840    3860 main.go:141] libmachine: STDERR: 
	I0802 11:00:24.304899    3860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2 +20000M
	I0802 11:00:24.312682    3860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:00:24.312696    3860 main.go:141] libmachine: STDERR: 
	I0802 11:00:24.312713    3860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:24.312721    3860 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:00:24.312735    3860 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:00:24.312768    3860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d7:9f:ce:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:00:24.314325    3860 main.go:141] libmachine: STDOUT: 
	I0802 11:00:24.314338    3860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:00:24.314353    3860 client.go:171] duration metric: took 435.111209ms to LocalClient.Create
	I0802 11:00:26.316234    3860 start.go:128] duration metric: took 2.493673083s to createHost
	I0802 11:00:26.316337    3860 start.go:83] releasing machines lock for "multinode-325000", held for 2.494205792s
	W0802 11:00:26.316681    3860 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:00:26.331250    3860 out.go:177] 
	W0802 11:00:26.335264    3860 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:00:26.335289    3860 out.go:239] * 
	* 
	W0802 11:00:26.337684    3860 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:00:26.350203    3860 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-325000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (65.8905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.13s)

TestMultiNode/serial/DeployApp2Nodes (85.33s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.951541ms)

** stderr ** 
	error: cluster "multinode-325000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- rollout status deployment/busybox: exit status 1 (56.472333ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.652584ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.158875ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.814375ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.037375ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0802 11:00:32.808653    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.960709ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.100708ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.902ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.5205ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.534083ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.292375ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.731ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.268916ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.340167ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.132916ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (30.113ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (85.33s)
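
Note: every kubectl failure in this subtest is downstream of FreshStart2Nodes above: the profile config exists on disk, but no VM or kubeconfig context was ever created, so each retry can only repeat "no server found for cluster" for roughly 85 seconds. A dependent subtest could bail out early with a context check along these lines (a sketch assuming kubectl is on PATH; hasContext is a hypothetical helper, not part of the suite):

// has_context.go - hypothetical early-exit guard (editor's sketch): skip
// kubectl-based assertions when the profile's kubeconfig context was never
// created, instead of retrying a dead cluster.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hasContext reports whether a kubeconfig context with the given name exists.
func hasContext(name string) (bool, error) {
	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("multinode-325000")
	if err != nil || !ok {
		fmt.Fprintln(os.Stderr, "cluster context missing; skipping kubectl-based assertions")
		os.Exit(1)
	}
	fmt.Println("context exists; safe to run kubectl-based assertions")
}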

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-325000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.665ms)

** stderr ** 
	error: no server found for cluster "multinode-325000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (30.264917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-325000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-325000 -v 3 --alsologtostderr: exit status 83 (42.495542ms)

-- stdout --
	* The control-plane node multinode-325000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-325000"

-- /stdout --
** stderr ** 
	I0802 11:01:51.877494    3978 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:51.877658    3978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:51.877664    3978 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:51.877667    3978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:51.877794    3978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:51.878029    3978 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:51.878200    3978 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:51.883168    3978 out.go:177] * The control-plane node multinode-325000 host is not running: state=Stopped
	I0802 11:01:51.888083    3978 out.go:177]   To start a cluster, run: "minikube start -p multinode-325000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-325000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.65775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-325000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-325000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.749625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-325000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-325000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-325000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.409208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
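
Note: the "unexpected end of JSON input" at multinode_test.go:230 follows directly from the kubectl failure above it: the jsonpath query wrote nothing to stdout, and decoding an empty byte slice always produces exactly that error. A two-line reproduction:

// empty_json.go - reproduces the decode error at multinode_test.go:230:
// kubectl exited with an error and printed nothing, so the test tried to
// unmarshal an empty stdout buffer.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // empty stdout from kubectl
	fmt.Println(err)                           // "unexpected end of JSON input"
}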

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-325000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-325000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-325000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-325000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.670625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
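
Note: the assertion here counts the entries in Config.Nodes of the profile JSON; because the cluster never started, the profile still holds its single placeholder node rather than the 3 the test expects (two from --nodes=2 plus the one AddNode would have added). A minimal sketch of the same check (the struct mirrors only the keys visible in the payload above; it is not minikube's own config type):

// count_nodes.go - editor's sketch reproducing the ProfileList node count
// from the `profile list --output json` payload shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList decodes only the fields the check needs.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// The failing run reports 1 here where the test wanted 3.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}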

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status --output json --alsologtostderr: exit status 7 (29.308375ms)

-- stdout --
	{"Name":"multinode-325000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0802 11:01:52.083531    3990 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:52.083696    3990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.083699    3990 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:52.083702    3990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.083830    3990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:52.083945    3990 out.go:298] Setting JSON to true
	I0802 11:01:52.083953    3990 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:52.084013    3990 notify.go:220] Checking for updates...
	I0802 11:01:52.084140    3990 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:52.084147    3990 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:52.084372    3990 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:52.084377    3990 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:52.084379    3990 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-325000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.78ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
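
Note: this failure is a JSON shape mismatch rather than a connectivity problem: for a single-node profile, `status --output json` prints one object, while the test unmarshals into []cmd.Status, hence "cannot unmarshal object into Go value of type []cmd.Status". A tolerant decoder would accept both shapes, as in this sketch (the Status struct mirrors the logged fields, not minikube's actual type):

// decode_status.go - editor's sketch of the shape mismatch behind the
// CopyFile failure: single-node profiles emit one JSON object, multi-node
// profiles an array.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either a bare object or an array of objects.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// The exact payload printed in the -- stdout -- block above.
	raw := []byte(`{"Name":"multinode-325000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	st, err := decodeStatuses(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded %d status record(s); host=%s\n", len(st), st[0].Host)
}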

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 node stop m03: exit status 85 (47.141959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-325000 node stop m03": exit status 85
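
Note: exit status 85 here is a guest-lookup failure rather than a stop failure: node m03 was never created because the initial multi-node start never got past the driver. Minikube names secondary machines by appending an index suffix to the profile name (the "m03" in the error above follows that scheme); a small illustrative sketch of the convention (the helper below is an assumption for illustration, not minikube's actual code):

	// machine_name_sketch.go - illustrative naming convention for secondary nodes.
	package main

	import "fmt"

	func machineName(profile string, nodeIndex int) string {
		if nodeIndex <= 1 {
			return profile // the primary control-plane machine keeps the profile name
		}
		return fmt.Sprintf("%s-m%02d", profile, nodeIndex)
	}

	func main() {
		fmt.Println(machineName("multinode-325000", 3)) // multinode-325000-m03
	}
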
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status: exit status 7 (29.482708ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr: exit status 7 (30.144625ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:01:52.220806    3998 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:52.220971    3998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.220974    3998 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:52.220977    3998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.221115    3998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:52.221242    3998 out.go:298] Setting JSON to false
	I0802 11:01:52.221251    3998 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:52.221325    3998 notify.go:220] Checking for updates...
	I0802 11:01:52.221443    3998 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:52.221450    3998 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:52.221658    3998 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:52.221662    3998 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:52.221665    3998 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr": multinode-325000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

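
Note: the "incorrect number of running kubelets" assertion is plausibly a substring count over the plain-text status output: after stopping one node of a three-node cluster, the test would expect two "kubelet: Running" lines, and with the whole cluster down it finds none. A hedged sketch of that style of check (the helper name and the expected count are assumptions, not the test's actual code):

	// kubelet_count_sketch.go - counts "kubelet: Running" lines in status output.
	package main

	import (
		"fmt"
		"strings"
	)

	func runningKubelets(statusOutput string) int {
		return strings.Count(statusOutput, "kubelet: Running")
	}

	func main() {
		// Status text as captured above: everything is Stopped.
		output := "multinode-325000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		if got, want := runningKubelets(output), 2; got != want {
			fmt.Printf("incorrect number of running kubelets: got %d, want %d\n", got, want)
		}
	}
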
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (30.292417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (43.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.78275ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0802 11:01:52.281282    4002 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:52.281515    4002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.281519    4002 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:52.281521    4002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.281655    4002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:52.281871    4002 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:52.282077    4002 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:52.285297    4002 out.go:177] 
	W0802 11:01:52.288166    4002 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0802 11:01:52.288171    4002 out.go:239] * 
	* 
	W0802 11:01:52.289725    4002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:01:52.293133    4002 out.go:177] 

** /stderr **
multinode_test.go:284: I0802 11:01:52.281282    4002 out.go:291] Setting OutFile to fd 1 ...
I0802 11:01:52.281515    4002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 11:01:52.281519    4002 out.go:304] Setting ErrFile to fd 2...
I0802 11:01:52.281521    4002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 11:01:52.281655    4002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 11:01:52.281871    4002 mustload.go:65] Loading cluster: multinode-325000
I0802 11:01:52.282077    4002 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 11:01:52.285297    4002 out.go:177] 
W0802 11:01:52.288166    4002 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0802 11:01:52.288171    4002 out.go:239] * 
* 
W0802 11:01:52.289725    4002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0802 11:01:52.293133    4002 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-325000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (29.434875ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:01:52.325810    4004 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:52.325958    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.325961    4004 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:52.325964    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:52.326099    4004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:52.326233    4004 out.go:298] Setting JSON to false
	I0802 11:01:52.326242    4004 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:52.326308    4004 notify.go:220] Checking for updates...
	I0802 11:01:52.326449    4004 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:52.326460    4004 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:52.326687    4004 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:52.326690    4004 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:52.326693    4004 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (72.921625ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:01:53.895142    4006 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:53.895361    4006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:53.895366    4006 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:53.895370    4006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:53.895579    4006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:53.895754    4006 out.go:298] Setting JSON to false
	I0802 11:01:53.895767    4006 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:53.895813    4006 notify.go:220] Checking for updates...
	I0802 11:01:53.896032    4006 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:53.896042    4006 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:53.896327    4006 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:53.896332    4006 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:53.896335    4006 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (72.8395ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:01:55.869506    4008 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:55.869685    4008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:55.869689    4008 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:55.869693    4008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:55.869898    4008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:55.870057    4008 out.go:298] Setting JSON to false
	I0802 11:01:55.870068    4008 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:55.870112    4008 notify.go:220] Checking for updates...
	I0802 11:01:55.870350    4008 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:55.870359    4008 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:55.870620    4008 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:55.870625    4008 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:55.870628    4008 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (72.710959ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:01:57.712695    4013 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:01:57.712861    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:57.712865    4013 out.go:304] Setting ErrFile to fd 2...
	I0802 11:01:57.712868    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:01:57.713054    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:01:57.713209    4013 out.go:298] Setting JSON to false
	I0802 11:01:57.713221    4013 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:01:57.713267    4013 notify.go:220] Checking for updates...
	I0802 11:01:57.713522    4013 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:01:57.713531    4013 status.go:255] checking status of multinode-325000 ...
	I0802 11:01:57.713827    4013 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:01:57.713832    4013 status.go:343] host is not running, skipping remaining checks
	I0802 11:01:57.713835    4013 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (72.714958ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:01.800997    4015 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:01.801242    4015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:01.801247    4015 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:01.801251    4015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:01.801468    4015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:01.801653    4015 out.go:298] Setting JSON to false
	I0802 11:02:01.801666    4015 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:01.801709    4015 notify.go:220] Checking for updates...
	I0802 11:02:01.801925    4015 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:01.801934    4015 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:01.802212    4015 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:01.802217    4015 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:01.802220    4015 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (75.050625ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:06.914756    4019 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:06.914981    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:06.914985    4019 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:06.914988    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:06.915161    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:06.915340    4019 out.go:298] Setting JSON to false
	I0802 11:02:06.915352    4019 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:06.915394    4019 notify.go:220] Checking for updates...
	I0802 11:02:06.915615    4019 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:06.915624    4019 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:06.915901    4019 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:06.915907    4019 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:06.915910    4019 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (71.988417ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:14.848974    4021 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:14.849172    4021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:14.849176    4021 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:14.849179    4021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:14.849356    4021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:14.849507    4021 out.go:298] Setting JSON to false
	I0802 11:02:14.849519    4021 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:14.849556    4021 notify.go:220] Checking for updates...
	I0802 11:02:14.849764    4021 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:14.849773    4021 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:14.850051    4021 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:14.850056    4021 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:14.850059    4021 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0802 11:02:15.029409    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (72.182833ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:26.293042    4028 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:26.293255    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:26.293260    4028 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:26.293264    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:26.293487    4028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:26.293645    4028 out.go:298] Setting JSON to false
	I0802 11:02:26.293659    4028 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:26.293689    4028 notify.go:220] Checking for updates...
	I0802 11:02:26.293949    4028 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:26.293960    4028 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:26.294245    4028 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:26.294251    4028 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:26.294254    4028 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr: exit status 7 (73.05525ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:36.047118    4032 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:36.047363    4032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:36.047367    4032 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:36.047370    4032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:36.047561    4032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:36.047738    4032 out.go:298] Setting JSON to false
	I0802 11:02:36.047751    4032 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:36.047800    4032 notify.go:220] Checking for updates...
	I0802 11:02:36.048074    4032 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:36.048087    4032 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:36.048374    4032 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:36.048380    4032 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:36.048383    4032 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-325000 status -v=7 --alsologtostderr" : exit status 7
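
Note: the "exit status 7" on every status call in this run is consistent with minikube's bitmask exit-code scheme for the status command: one bit each for the host, the kubelet, and the apiserver being down, so 1|2|4 = 7 when all three are stopped. A sketch of that layout (the flag names mirror cmd/minikube/cmd/status.go at the time of writing; treat them as illustrative rather than authoritative):

	// status_exit_code_sketch.go - why a fully stopped cluster reports exit status 7.
	package main

	import "fmt"

	const (
		minikubeNotRunningStatusFlag = 1 << 0 // host stopped
		clusterNotRunningStatusFlag  = 1 << 1 // kubelet stopped
		k8sNotRunningStatusFlag      = 1 << 2 // apiserver stopped
	)

	func main() {
		code := minikubeNotRunningStatusFlag | clusterNotRunningStatusFlag | k8sNotRunningStatusFlag
		fmt.Println("exit status", code) // exit status 7
	}
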
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (32.949625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (43.83s)

TestMultiNode/serial/RestartKeepsNodes (8.44s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-325000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-325000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-325000: (3.0831165s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-325000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-325000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.223421917s)

-- stdout --
	* [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	* Restarting existing qemu2 VM for "multinode-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:02:39.257519    4060 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:39.257676    4060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:39.257680    4060 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:39.257684    4060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:39.257861    4060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:39.259131    4060 out.go:298] Setting JSON to false
	I0802 11:02:39.278444    4060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3723,"bootTime":1722618036,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:02:39.278513    4060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:02:39.283151    4060 out.go:177] * [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:02:39.290072    4060 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:02:39.290129    4060 notify.go:220] Checking for updates...
	I0802 11:02:39.296105    4060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:02:39.298995    4060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:02:39.302070    4060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:02:39.305137    4060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:02:39.311010    4060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:02:39.314402    4060 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:39.314463    4060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:02:39.319065    4060 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:02:39.326094    4060 start.go:297] selected driver: qemu2
	I0802 11:02:39.326101    4060 start.go:901] validating driver "qemu2" against &{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:02:39.326178    4060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:02:39.328736    4060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:02:39.328780    4060 cni.go:84] Creating CNI manager for ""
	I0802 11:02:39.328785    4060 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 11:02:39.328842    4060 start.go:340] cluster config:
	{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:02:39.332781    4060 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:02:39.340089    4060 out.go:177] * Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	I0802 11:02:39.344029    4060 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:02:39.344047    4060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:02:39.344061    4060 cache.go:56] Caching tarball of preloaded images
	I0802 11:02:39.344146    4060 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:02:39.344153    4060 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:02:39.344210    4060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/multinode-325000/config.json ...
	I0802 11:02:39.344636    4060 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:02:39.344675    4060 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "multinode-325000"
	I0802 11:02:39.344686    4060 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:02:39.344692    4060 fix.go:54] fixHost starting: 
	I0802 11:02:39.344821    4060 fix.go:112] recreateIfNeeded on multinode-325000: state=Stopped err=<nil>
	W0802 11:02:39.344831    4060 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:02:39.352051    4060 out.go:177] * Restarting existing qemu2 VM for "multinode-325000" ...
	I0802 11:02:39.356056    4060 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:02:39.356104    4060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d7:9f:ce:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:02:39.358295    4060 main.go:141] libmachine: STDOUT: 
	I0802 11:02:39.358317    4060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:02:39.358347    4060 fix.go:56] duration metric: took 13.656625ms for fixHost
	I0802 11:02:39.358352    4060 start.go:83] releasing machines lock for "multinode-325000", held for 13.672875ms
	W0802 11:02:39.358372    4060 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:02:39.358404    4060 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:02:39.358409    4060 start.go:729] Will try again in 5 seconds ...
	I0802 11:02:44.360446    4060 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:02:44.360817    4060 start.go:364] duration metric: took 278µs to acquireMachinesLock for "multinode-325000"
	I0802 11:02:44.360925    4060 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:02:44.360945    4060 fix.go:54] fixHost starting: 
	I0802 11:02:44.361610    4060 fix.go:112] recreateIfNeeded on multinode-325000: state=Stopped err=<nil>
	W0802 11:02:44.361640    4060 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:02:44.367165    4060 out.go:177] * Restarting existing qemu2 VM for "multinode-325000" ...
	I0802 11:02:44.374110    4060 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:02:44.374311    4060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d7:9f:ce:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:02:44.383135    4060 main.go:141] libmachine: STDOUT: 
	I0802 11:02:44.383239    4060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:02:44.383357    4060 fix.go:56] duration metric: took 22.407583ms for fixHost
	I0802 11:02:44.383375    4060 start.go:83] releasing machines lock for "multinode-325000", held for 22.533334ms
	W0802 11:02:44.383547    4060 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:02:44.391034    4060 out.go:177] 
	W0802 11:02:44.395158    4060 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:02:44.395182    4060 out.go:239] * 
	* 
	W0802 11:02:44.397784    4060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:02:44.406112    4060 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-325000" : exit status 80
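
Note: every restart in this run dies the same way: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine "executing:" line in the stderr above), and that wrapper cannot connect to /var/run/socket_vmnet, so no qemu2/socket_vmnet profile can boot until the socket_vmnet daemon is running again. A quick reachability probe, as a sketch (the socket path is taken from the logs; the real daemon normally runs as root):

	// socket_vmnet_probe.go - checks whether anything is listening on the
	// socket_vmnet control socket that the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
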
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-325000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (32.283541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.44s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 node delete m03: exit status 83 (39.139416ms)

-- stdout --
	* The control-plane node multinode-325000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-325000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-325000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr: exit status 7 (29.380625ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:44.588829    4077 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:44.588946    4077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:44.588949    4077 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:44.588952    4077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:44.589086    4077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:44.589206    4077 out.go:298] Setting JSON to false
	I0802 11:02:44.589215    4077 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:44.589277    4077 notify.go:220] Checking for updates...
	I0802 11:02:44.589412    4077 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:44.589419    4077 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:44.589626    4077 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:44.589629    4077 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:44.589632    4077 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.044375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-325000 stop: (3.230198541s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status: exit status 7 (61.22875ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr: exit status 7 (32.712291ms)

-- stdout --
	multinode-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0802 11:02:47.942391    4105 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:47.942527    4105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:47.942531    4105 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:47.942534    4105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:47.942670    4105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:47.942789    4105 out.go:298] Setting JSON to false
	I0802 11:02:47.942799    4105 mustload.go:65] Loading cluster: multinode-325000
	I0802 11:02:47.942849    4105 notify.go:220] Checking for updates...
	I0802 11:02:47.943008    4105 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:47.943017    4105 status.go:255] checking status of multinode-325000 ...
	I0802 11:02:47.943261    4105 status.go:330] multinode-325000 host status = "Stopped" (err=<nil>)
	I0802 11:02:47.943265    4105 status.go:343] host is not running, skipping remaining checks
	I0802 11:02:47.943270    4105 status.go:257] multinode-325000 status: &{Name:multinode-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr": multinode-325000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-325000 status --alsologtostderr": multinode-325000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (28.308042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.35s)
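The two assertions above fail on counts, not on the stop itself: "minikube stop" succeeded, but the cluster should still contain two nodes at this point, and every earlier attempt to start the second node failed, so the status output carries only one "host: Stopped"/"kubelet: Stopped" block. A rough approximation of the check performed at multinode_test.go:364/368 (an illustration of the counting logic, not the test's exact code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Status output captured above: only one node block is present.
    	stdout := "multinode-325000\ntype: Control Plane\nhost: Stopped\n" +
    		"kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
    	// A two-node cluster should report two stopped hosts and kubelets.
    	if strings.Count(stdout, "host: Stopped") != 2 {
    		fmt.Println("incorrect number of stopped hosts")
    	}
    	if strings.Count(stdout, "kubelet: Stopped") != 2 {
    		fmt.Println("incorrect number of stopped kubelets")
    	}
    }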

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-325000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-325000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178034333s)

-- stdout --
	* [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	* Restarting existing qemu2 VM for "multinode-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:02:47.999547    4109 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:02:47.999677    4109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:47.999680    4109 out.go:304] Setting ErrFile to fd 2...
	I0802 11:02:47.999683    4109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:02:47.999811    4109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:02:48.000977    4109 out.go:298] Setting JSON to false
	I0802 11:02:48.017715    4109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3732,"bootTime":1722618036,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:02:48.017791    4109 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:02:48.023296    4109 out.go:177] * [multinode-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:02:48.032317    4109 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:02:48.032360    4109 notify.go:220] Checking for updates...
	I0802 11:02:48.039229    4109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:02:48.043308    4109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:02:48.046272    4109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:02:48.049258    4109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:02:48.052223    4109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:02:48.055506    4109 config.go:182] Loaded profile config "multinode-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:02:48.055796    4109 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:02:48.059177    4109 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:02:48.066204    4109 start.go:297] selected driver: qemu2
	I0802 11:02:48.066213    4109 start.go:901] validating driver "qemu2" against &{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:02:48.066266    4109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:02:48.068716    4109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:02:48.068759    4109 cni.go:84] Creating CNI manager for ""
	I0802 11:02:48.068765    4109 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 11:02:48.068810    4109 start.go:340] cluster config:
	{Name:multinode-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:02:48.072456    4109 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:02:48.081217    4109 out.go:177] * Starting "multinode-325000" primary control-plane node in "multinode-325000" cluster
	I0802 11:02:48.085266    4109 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:02:48.085280    4109 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:02:48.085290    4109 cache.go:56] Caching tarball of preloaded images
	I0802 11:02:48.085349    4109 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:02:48.085355    4109 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:02:48.085405    4109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/multinode-325000/config.json ...
	I0802 11:02:48.085839    4109 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:02:48.085878    4109 start.go:364] duration metric: took 31.916µs to acquireMachinesLock for "multinode-325000"
	I0802 11:02:48.085887    4109 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:02:48.085894    4109 fix.go:54] fixHost starting: 
	I0802 11:02:48.086022    4109 fix.go:112] recreateIfNeeded on multinode-325000: state=Stopped err=<nil>
	W0802 11:02:48.086031    4109 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:02:48.090249    4109 out.go:177] * Restarting existing qemu2 VM for "multinode-325000" ...
	I0802 11:02:48.094277    4109 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:02:48.094319    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d7:9f:ce:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:02:48.096487    4109 main.go:141] libmachine: STDOUT: 
	I0802 11:02:48.096516    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:02:48.096544    4109 fix.go:56] duration metric: took 10.650834ms for fixHost
	I0802 11:02:48.096550    4109 start.go:83] releasing machines lock for "multinode-325000", held for 10.667584ms
	W0802 11:02:48.096557    4109 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:02:48.096593    4109 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:02:48.096598    4109 start.go:729] Will try again in 5 seconds ...
	I0802 11:02:53.098660    4109 start.go:360] acquireMachinesLock for multinode-325000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:02:53.099065    4109 start.go:364] duration metric: took 311.167µs to acquireMachinesLock for "multinode-325000"
	I0802 11:02:53.099188    4109 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:02:53.099208    4109 fix.go:54] fixHost starting: 
	I0802 11:02:53.099955    4109 fix.go:112] recreateIfNeeded on multinode-325000: state=Stopped err=<nil>
	W0802 11:02:53.099986    4109 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:02:53.104425    4109 out.go:177] * Restarting existing qemu2 VM for "multinode-325000" ...
	I0802 11:02:53.108426    4109 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:02:53.108651    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:d7:9f:ce:5e:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/multinode-325000/disk.qcow2
	I0802 11:02:53.117515    4109 main.go:141] libmachine: STDOUT: 
	I0802 11:02:53.117581    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:02:53.117668    4109 fix.go:56] duration metric: took 18.46125ms for fixHost
	I0802 11:02:53.117690    4109 start.go:83] releasing machines lock for "multinode-325000", held for 18.605083ms
	W0802 11:02:53.117921    4109 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:02:53.124409    4109 out.go:177] 
	W0802 11:02:53.127417    4109 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:02:53.127442    4109 out.go:239] * 
	* 
	W0802 11:02:53.129951    4109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:02:53.137406    4109 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-325000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (67.9315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
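Every failed start and restart in this report dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet before it can hand a connected fd to qemu (-netdev socket,id=net0,fd=3); on this agent that connection is refused, meaning no socket_vmnet daemon is listening. The condition can be reproduced outside minikube with a short Go probe (a diagnostic sketch, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Dial the same unix socket socket_vmnet_client connects to.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		// On this agent this prints "connection refused",
    		// matching the ERROR lines above.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

Since the qemu-img steps elsewhere in this report succeed, the VM tooling itself looks healthy; restoring the socket_vmnet service on the build agent should clear the GUEST_PROVISION failures.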

TestMultiNode/serial/ValidateNameConflict (20.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-325000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-325000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-325000-m01 --driver=qemu2 : exit status 80 (9.871319791s)

-- stdout --
	* [multinode-325000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-325000-m01" primary control-plane node in "multinode-325000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-325000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-325000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-325000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-325000-m02 --driver=qemu2 : exit status 80 (10.077653s)

-- stdout --
	* [multinode-325000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-325000-m02" primary control-plane node in "multinode-325000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-325000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-325000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-325000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-325000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-325000: exit status 83 (75.907667ms)

-- stdout --
	* The control-plane node multinode-325000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-325000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-325000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-325000 -n multinode-325000: exit status 7 (29.971333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.17s)
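Two patterns in this test recur throughout the report. First, "node add" exits with status 83 and prints advice instead of attempting the add, because the control-plane host is stopped. Second, each failed "start" follows the driver's retry shape, visible in the earlier --alsologtostderr runs: StartHost fails, the half-created VM is deleted, and exactly one retry is made after a fixed 5-second delay before exiting with GUEST_PROVISION. A compressed sketch of that retry flow (startHost is a hypothetical stand-in for the real driver call, not minikube's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the driver call that fails above.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // the fixed delay seen in the logs
    		if err := startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }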

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-895000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-895000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.859860292s)

-- stdout --
	* [test-preload-895000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-895000" primary control-plane node in "test-preload-895000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-895000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:03:13.523923    4167 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:03:13.524052    4167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:13.524056    4167 out.go:304] Setting ErrFile to fd 2...
	I0802 11:03:13.524058    4167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:03:13.524188    4167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:03:13.525212    4167 out.go:298] Setting JSON to false
	I0802 11:03:13.541211    4167 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3757,"bootTime":1722618036,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:03:13.541289    4167 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:03:13.546001    4167 out.go:177] * [test-preload-895000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:03:13.553929    4167 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:03:13.553993    4167 notify.go:220] Checking for updates...
	I0802 11:03:13.560999    4167 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:03:13.563923    4167 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:03:13.566933    4167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:03:13.569922    4167 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:03:13.572869    4167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:03:13.576187    4167 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:03:13.576244    4167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:03:13.579895    4167 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:03:13.586896    4167 start.go:297] selected driver: qemu2
	I0802 11:03:13.586902    4167 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:03:13.586908    4167 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:03:13.589032    4167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:03:13.591932    4167 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:03:13.594986    4167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:03:13.595031    4167 cni.go:84] Creating CNI manager for ""
	I0802 11:03:13.595041    4167 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:03:13.595052    4167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:03:13.595084    4167 start.go:340] cluster config:
	{Name:test-preload-895000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:03:13.598678    4167 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.605909    4167 out.go:177] * Starting "test-preload-895000" primary control-plane node in "test-preload-895000" cluster
	I0802 11:03:13.609866    4167 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0802 11:03:13.609953    4167 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/test-preload-895000/config.json ...
	I0802 11:03:13.609944    4167 cache.go:107] acquiring lock: {Name:mk3115db8876b96740ef61c362e182fe6c315e12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.609952    4167 cache.go:107] acquiring lock: {Name:mkb6baaecb91c2cbf6ca9738e8ed8311a994b60a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.609969    4167 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/test-preload-895000/config.json: {Name:mkae8ccf712bb137ad98862b3dee0e846029db91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:03:13.609964    4167 cache.go:107] acquiring lock: {Name:mk70a8cbb3d3f30a129f5e5d7e9a9ad11341a688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.609983    4167 cache.go:107] acquiring lock: {Name:mkc9661d296c523a5ee08c8c56e821727b88c18e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.610096    4167 cache.go:107] acquiring lock: {Name:mk45e6779937bba9d994e5f2e36e8bd049474931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.610212    4167 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0802 11:03:13.610225    4167 start.go:360] acquireMachinesLock for test-preload-895000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:03:13.610230    4167 cache.go:107] acquiring lock: {Name:mkd3d0fd82052a9627be885084f50ff7a08e83a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.610299    4167 start.go:364] duration metric: took 61.125µs to acquireMachinesLock for "test-preload-895000"
	I0802 11:03:13.610313    4167 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0802 11:03:13.610324    4167 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0802 11:03:13.610376    4167 cache.go:107] acquiring lock: {Name:mk2ce17ae670488f301007bc7650f56467ca43f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.610380    4167 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 11:03:13.610422    4167 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:03:13.610456    4167 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:03:13.610347    4167 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:03:13.610332    4167 start.go:93] Provisioning new machine with config: &{Name:test-preload-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:03:13.610493    4167 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:03:13.609949    4167 cache.go:107] acquiring lock: {Name:mk73d743a5386ebb6e3441928a380a1d8ff4de2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:03:13.611036    4167 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0802 11:03:13.617921    4167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:03:13.622611    4167 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0802 11:03:13.623895    4167 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:03:13.624112    4167 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:03:13.624117    4167 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0802 11:03:13.624157    4167 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0802 11:03:13.626021    4167 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:03:13.626076    4167 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 11:03:13.626296    4167 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0802 11:03:13.636129    4167 start.go:159] libmachine.API.Create for "test-preload-895000" (driver="qemu2")
	I0802 11:03:13.636151    4167 client.go:168] LocalClient.Create starting
	I0802 11:03:13.636226    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:03:13.636257    4167 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:13.636268    4167 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:13.636307    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:03:13.636331    4167 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:13.636339    4167 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:13.636737    4167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:03:13.788395    4167 main.go:141] libmachine: Creating SSH key...
	I0802 11:03:13.929106    4167 main.go:141] libmachine: Creating Disk image...
	I0802 11:03:13.929129    4167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:03:13.932903    4167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:13.945718    4167 main.go:141] libmachine: STDOUT: 
	I0802 11:03:13.945756    4167 main.go:141] libmachine: STDERR: 
	I0802 11:03:13.945829    4167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2 +20000M
	I0802 11:03:13.955226    4167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:03:13.955248    4167 main.go:141] libmachine: STDERR: 
	I0802 11:03:13.955263    4167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:13.955266    4167 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:03:13.955281    4167 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:03:13.955306    4167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:7c:94:17:66:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:13.957228    4167 main.go:141] libmachine: STDOUT: 
	I0802 11:03:13.957255    4167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:03:13.957272    4167 client.go:171] duration metric: took 321.126ms to LocalClient.Create
	I0802 11:03:14.149329    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0802 11:03:14.194662    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0802 11:03:14.241059    4167 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0802 11:03:14.241081    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0802 11:03:14.253713    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0802 11:03:14.290847    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0802 11:03:14.384084    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0802 11:03:14.399589    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0802 11:03:14.434092    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0802 11:03:14.434144    4167 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 824.203292ms
	I0802 11:03:14.434178    4167 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0802 11:03:15.592532    4167 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0802 11:03:15.592664    4167 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0802 11:03:15.799893    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0802 11:03:15.799945    4167 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.190073042s
	I0802 11:03:15.799992    4167 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0802 11:03:15.853610    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0802 11:03:15.853668    4167 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.243757125s
	I0802 11:03:15.853694    4167 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0802 11:03:15.957512    4167 start.go:128] duration metric: took 2.347041708s to createHost
	I0802 11:03:15.957558    4167 start.go:83] releasing machines lock for "test-preload-895000", held for 2.347322166s
	W0802 11:03:15.957612    4167 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:15.965570    4167 out.go:177] * Deleting "test-preload-895000" in qemu2 ...
	W0802 11:03:15.996097    4167 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:15.996124    4167 start.go:729] Will try again in 5 seconds ...
	I0802 11:03:17.494723    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0802 11:03:17.494775    4167 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.884730083s
	I0802 11:03:17.494821    4167 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0802 11:03:18.180082    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0802 11:03:18.180129    4167 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.57033575s
	I0802 11:03:18.180162    4167 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0802 11:03:19.019825    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0802 11:03:19.019869    4167 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.409967583s
	I0802 11:03:19.019912    4167 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0802 11:03:19.312473    4167 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0802 11:03:19.312517    4167 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.702762333s
	I0802 11:03:19.312549    4167 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0802 11:03:20.996251    4167 start.go:360] acquireMachinesLock for test-preload-895000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:03:20.996650    4167 start.go:364] duration metric: took 323.417µs to acquireMachinesLock for "test-preload-895000"
	I0802 11:03:20.996770    4167 start.go:93] Provisioning new machine with config: &{Name:test-preload-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:03:20.997054    4167 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:03:21.006788    4167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:03:21.057528    4167 start.go:159] libmachine.API.Create for "test-preload-895000" (driver="qemu2")
	I0802 11:03:21.057612    4167 client.go:168] LocalClient.Create starting
	I0802 11:03:21.057740    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:03:21.057805    4167 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:21.057818    4167 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:21.057888    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:03:21.057932    4167 main.go:141] libmachine: Decoding PEM data...
	I0802 11:03:21.057943    4167 main.go:141] libmachine: Parsing certificate...
	I0802 11:03:21.058398    4167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:03:21.219928    4167 main.go:141] libmachine: Creating SSH key...
	I0802 11:03:21.290108    4167 main.go:141] libmachine: Creating Disk image...
	I0802 11:03:21.290121    4167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:03:21.290320    4167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:21.299554    4167 main.go:141] libmachine: STDOUT: 
	I0802 11:03:21.299574    4167 main.go:141] libmachine: STDERR: 
	I0802 11:03:21.299617    4167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2 +20000M
	I0802 11:03:21.307693    4167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:03:21.307718    4167 main.go:141] libmachine: STDERR: 
	I0802 11:03:21.307734    4167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:21.307739    4167 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:03:21.307746    4167 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:03:21.307782    4167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:e6:3c:d0:a1:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/test-preload-895000/disk.qcow2
	I0802 11:03:21.309504    4167 main.go:141] libmachine: STDOUT: 
	I0802 11:03:21.309525    4167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:03:21.309540    4167 client.go:171] duration metric: took 251.9315ms to LocalClient.Create
	I0802 11:03:23.309876    4167 start.go:128] duration metric: took 2.312852708s to createHost
	I0802 11:03:23.309940    4167 start.go:83] releasing machines lock for "test-preload-895000", held for 2.313346s
	W0802 11:03:23.310215    4167 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:03:23.324747    4167 out.go:177] 
	W0802 11:03:23.327872    4167 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:03:23.327912    4167 out.go:239] * 
	* 
	W0802 11:03:23.330588    4167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:03:23.340843    4167 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-895000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-02 11:03:23.358042 -0700 PDT m=+2274.866720876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-895000 -n test-preload-895000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-895000 -n test-preload-895000: exit status 7 (65.632416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-895000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-895000
--- FAIL: TestPreload (10.01s)
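
Every qemu2 VM creation in this run dies at the same step: minikube execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no guest ever boots and each test aborts with exit status 80. A minimal pre-flight sketch for the CI host follows; it assumes socket_vmnet is installed under its default /opt/socket_vmnet prefix, and the gateway address is an illustrative value — neither is confirmed by this log:

	# Check that the daemon's unix socket exists before running the suite.
	ls -l /var/run/socket_vmnet || echo "socket missing: socket_vmnet daemon is not running"

	# Start the daemon in the foreground for debugging (needs root).
	# The --vmnet-gateway value is an example; adjust to the host's network.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# If socket_vmnet came from Homebrew, the service form is typically:
	# sudo brew services start socket_vmnet

If the socket check fails, restarting the daemon (or its launchd/Homebrew service) before the run should clear this whole class of failures.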

TestScheduledStopUnix (10.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-776000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-776000 --memory=2048 --driver=qemu2 : exit status 80 (9.95739s)

-- stdout --
	* [scheduled-stop-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-776000" primary control-plane node in "scheduled-stop-776000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-776000" primary control-plane node in "scheduled-stop-776000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-02 11:03:33.460609 -0700 PDT m=+2284.969640501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-776000 -n scheduled-stop-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-776000 -n scheduled-stop-776000: exit status 7 (67.303041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-776000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-776000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-776000
--- FAIL: TestScheduledStopUnix (10.10s)

TestSkaffold (12.25s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe874319008 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe874319008 version: (1.070177917s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-030000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-030000 --memory=2600 --driver=qemu2 : exit status 80 (9.848156667s)

-- stdout --
	* [skaffold-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-030000" primary control-plane node in "skaffold-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-030000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-030000" primary control-plane node in "skaffold-030000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-02 11:03:45.712418 -0700 PDT m=+2297.221880417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-030000 -n skaffold-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-030000 -n skaffold-030000: exit status 7 (63.269084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-030000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-030000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-030000
--- FAIL: TestSkaffold (12.25s)
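
The identical connection refusal repeats for every test below that has to create a fresh qemu2 VM, which is why each one fails in roughly 10 seconds. As a hedged workaround when the daemon cannot be restored, minikube's qemu2 driver can be pointed at qemu's built-in user-mode networking instead of socket_vmnet; the flag value follows the qemu2 driver documentation and should be verified against this minikube build (the profile name is illustrative):

	# Sketch: start a throwaway profile without socket_vmnet.
	# User-mode networking has limits (e.g. the node IP is not reachable
	# from the host), so this is a diagnostic fallback, not a fix.
	out/minikube-darwin-arm64 start -p network-check --driver=qemu2 --network=user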

TestRunningBinaryUpgrade (610.45s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2673407984 start -p running-upgrade-894000 --memory=2200 --vm-driver=qemu2 
E0802 11:05:32.807780    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2673407984 start -p running-upgrade-894000 --memory=2200 --vm-driver=qemu2 : (1m10.591851833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-894000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0802 11:07:15.019868    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-894000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m25.477986584s)

-- stdout --
	* [running-upgrade-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-894000" primary control-plane node in "running-upgrade-894000" cluster
	* Updating the running qemu2 "running-upgrade-894000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0802 11:05:39.972923    4562 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:05:39.973064    4562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:05:39.973067    4562 out.go:304] Setting ErrFile to fd 2...
	I0802 11:05:39.973070    4562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:05:39.973208    4562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:05:39.974209    4562 out.go:298] Setting JSON to false
	I0802 11:05:39.990845    4562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3903,"bootTime":1722618036,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:05:39.990919    4562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:05:39.996559    4562 out.go:177] * [running-upgrade-894000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:05:40.003558    4562 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:05:40.003601    4562 notify.go:220] Checking for updates...
	I0802 11:05:40.010540    4562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:05:40.014511    4562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:05:40.017590    4562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:05:40.020564    4562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:05:40.023516    4562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:05:40.026737    4562 config.go:182] Loaded profile config "running-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:05:40.029565    4562 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 11:05:40.030860    4562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:05:40.035551    4562 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:05:40.039881    4562 start.go:297] selected driver: qemu2
	I0802 11:05:40.039887    4562 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:05:40.039933    4562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:05:40.042241    4562 cni.go:84] Creating CNI manager for ""
	I0802 11:05:40.042257    4562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:05:40.042283    4562 start.go:340] cluster config:
	{Name:running-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:05:40.042335    4562 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:05:40.048504    4562 out.go:177] * Starting "running-upgrade-894000" primary control-plane node in "running-upgrade-894000" cluster
	I0802 11:05:40.052525    4562 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:05:40.052537    4562 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0802 11:05:40.052547    4562 cache.go:56] Caching tarball of preloaded images
	I0802 11:05:40.052597    4562 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:05:40.052601    4562 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0802 11:05:40.052643    4562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/config.json ...
	I0802 11:05:40.053104    4562 start.go:360] acquireMachinesLock for running-upgrade-894000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:05:40.053134    4562 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "running-upgrade-894000"
	I0802 11:05:40.053141    4562 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:05:40.053147    4562 fix.go:54] fixHost starting: 
	I0802 11:05:40.053700    4562 fix.go:112] recreateIfNeeded on running-upgrade-894000: state=Running err=<nil>
	W0802 11:05:40.053708    4562 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:05:40.061490    4562 out.go:177] * Updating the running qemu2 "running-upgrade-894000" VM ...
	I0802 11:05:40.065545    4562 machine.go:94] provisionDockerMachine start ...
	I0802 11:05:40.065576    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.065673    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.065677    4562 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 11:05:40.120641    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-894000
	
	I0802 11:05:40.120657    4562 buildroot.go:166] provisioning hostname "running-upgrade-894000"
	I0802 11:05:40.120707    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.120829    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.120834    4562 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-894000 && echo "running-upgrade-894000" | sudo tee /etc/hostname
	I0802 11:05:40.174870    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-894000
	
	I0802 11:05:40.174923    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.175040    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.175050    4562 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-894000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-894000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-894000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 11:05:40.227038    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 11:05:40.227049    4562 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-1243/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-1243/.minikube}
	I0802 11:05:40.227059    4562 buildroot.go:174] setting up certificates
	I0802 11:05:40.227067    4562 provision.go:84] configureAuth start
	I0802 11:05:40.227073    4562 provision.go:143] copyHostCerts
	I0802 11:05:40.227125    4562 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem, removing ...
	I0802 11:05:40.227131    4562 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem
	I0802 11:05:40.227305    4562 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem (1078 bytes)
	I0802 11:05:40.227504    4562 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem, removing ...
	I0802 11:05:40.227508    4562 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem
	I0802 11:05:40.227571    4562 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem (1123 bytes)
	I0802 11:05:40.227687    4562 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem, removing ...
	I0802 11:05:40.227691    4562 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem
	I0802 11:05:40.227739    4562 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem (1675 bytes)
	I0802 11:05:40.227841    4562 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-894000 san=[127.0.0.1 localhost minikube running-upgrade-894000]
	I0802 11:05:40.302654    4562 provision.go:177] copyRemoteCerts
	I0802 11:05:40.302687    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 11:05:40.302693    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:05:40.330447    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 11:05:40.337939    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 11:05:40.345336    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 11:05:40.351696    4562 provision.go:87] duration metric: took 124.62925ms to configureAuth
	I0802 11:05:40.351705    4562 buildroot.go:189] setting minikube options for container-runtime
	I0802 11:05:40.351817    4562 config.go:182] Loaded profile config "running-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:05:40.351852    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.351943    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.351949    4562 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0802 11:05:40.404556    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0802 11:05:40.404564    4562 buildroot.go:70] root file system type: tmpfs
	I0802 11:05:40.404614    4562 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0802 11:05:40.404664    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.404783    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.404815    4562 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0802 11:05:40.459228    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0802 11:05:40.459270    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.459371    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.459381    4562 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0802 11:05:40.512416    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 11:05:40.512424    4562 machine.go:97] duration metric: took 446.888875ms to provisionDockerMachine
	I0802 11:05:40.512431    4562 start.go:293] postStartSetup for "running-upgrade-894000" (driver="qemu2")
	I0802 11:05:40.512439    4562 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 11:05:40.512485    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 11:05:40.512493    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:05:40.540561    4562 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 11:05:40.542246    4562 info.go:137] Remote host: Buildroot 2021.02.12
	I0802 11:05:40.542252    4562 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/addons for local assets ...
	I0802 11:05:40.542331    4562 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/files for local assets ...
	I0802 11:05:40.542449    4562 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem -> 17472.pem in /etc/ssl/certs
	I0802 11:05:40.542575    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 11:05:40.545118    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:05:40.552065    4562 start.go:296] duration metric: took 39.627375ms for postStartSetup
	I0802 11:05:40.552080    4562 fix.go:56] duration metric: took 498.953459ms for fixHost
	I0802 11:05:40.552113    4562 main.go:141] libmachine: Using SSH client type: native
	I0802 11:05:40.552218    4562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b22a10] 0x102b25270 <nil>  [] 0s} localhost 50280 <nil> <nil>}
	I0802 11:05:40.552223    4562 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 11:05:40.604382    4562 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722621940.705587607
	
	I0802 11:05:40.604392    4562 fix.go:216] guest clock: 1722621940.705587607
	I0802 11:05:40.604396    4562 fix.go:229] Guest: 2024-08-02 11:05:40.705587607 -0700 PDT Remote: 2024-08-02 11:05:40.552082 -0700 PDT m=+0.598765751 (delta=153.505607ms)
	I0802 11:05:40.604407    4562 fix.go:200] guest clock delta is within tolerance: 153.505607ms
	I0802 11:05:40.604410    4562 start.go:83] releasing machines lock for "running-upgrade-894000", held for 551.292083ms
	I0802 11:05:40.604476    4562 ssh_runner.go:195] Run: cat /version.json
	I0802 11:05:40.604485    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:05:40.604476    4562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 11:05:40.604528    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	W0802 11:05:40.605061    4562 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50280: connect: connection refused
	I0802 11:05:40.605092    4562 retry.go:31] will retry after 261.860428ms: dial tcp [::1]:50280: connect: connection refused
	W0802 11:05:40.896970    4562 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0802 11:05:40.897055    4562 ssh_runner.go:195] Run: systemctl --version
	I0802 11:05:40.898905    4562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 11:05:40.900733    4562 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 11:05:40.900759    4562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0802 11:05:40.903396    4562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0802 11:05:40.907705    4562 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 11:05:40.907711    4562 start.go:495] detecting cgroup driver to use...
	I0802 11:05:40.907781    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:05:40.912842    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0802 11:05:40.915657    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0802 11:05:40.918727    4562 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0802 11:05:40.918745    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0802 11:05:40.922234    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:05:40.925877    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0802 11:05:40.929236    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:05:40.932232    4562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 11:05:40.935087    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0802 11:05:40.938129    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0802 11:05:40.942194    4562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0802 11:05:40.945749    4562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 11:05:40.948481    4562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 11:05:40.951099    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:41.051862    4562 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0802 11:05:41.062189    4562 start.go:495] detecting cgroup driver to use...
	I0802 11:05:41.062256    4562 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0802 11:05:41.068249    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:05:41.073277    4562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 11:05:41.083807    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:05:41.088225    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 11:05:41.092464    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:05:41.097774    4562 ssh_runner.go:195] Run: which cri-dockerd
	I0802 11:05:41.099092    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0802 11:05:41.102182    4562 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0802 11:05:41.107587    4562 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0802 11:05:41.201706    4562 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0802 11:05:41.283820    4562 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0802 11:05:41.283884    4562 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0802 11:05:41.288865    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:41.383440    4562 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:05:44.112011    4562 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.728650917s)
	I0802 11:05:44.112071    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0802 11:05:44.117442    4562 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0802 11:05:44.124435    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:05:44.129133    4562 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0802 11:05:44.226705    4562 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0802 11:05:44.308733    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:44.387369    4562 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0802 11:05:44.393417    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:05:44.398086    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:44.483076    4562 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0802 11:05:44.522298    4562 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0802 11:05:44.522371    4562 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0802 11:05:44.524531    4562 start.go:563] Will wait 60s for crictl version
	I0802 11:05:44.524583    4562 ssh_runner.go:195] Run: which crictl
	I0802 11:05:44.525853    4562 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 11:05:44.537840    4562 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0802 11:05:44.537913    4562 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 11:05:44.550372    4562 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
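
start.go:579 reads the runtime identity from the "Key:  value" lines crictl prints, then cross-checks with `docker version`. A small parser for that output:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseCrictlVersion turns crictl's "Key:  value" lines into a map.
    func parseCrictlVersion(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), ":")
            if !ok {
                continue
            }
            fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
        }
        return fields
    }

    func main() {
        out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  20.10.16\nRuntimeApiVersion:  1.41.0\n"
        v := parseCrictlVersion(out)
        fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // docker 20.10.16
    }
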
	I0802 11:05:44.569622    4562 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0802 11:05:44.569748    4562 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0802 11:05:44.571150    4562 kubeadm.go:883] updating cluster {Name:running-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0802 11:05:44.571194    4562 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:05:44.571237    4562 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:05:44.581640    4562 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:05:44.581647    4562 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:05:44.581690    4562 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:05:44.585065    4562 ssh_runner.go:195] Run: which lz4
	I0802 11:05:44.586441    4562 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 11:05:44.587853    4562 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 11:05:44.587864    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0802 11:05:45.483685    4562 docker.go:649] duration metric: took 897.301709ms to copy over tarball
	I0802 11:05:45.483742    4562 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 11:05:46.739685    4562 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.255976083s)
	I0802 11:05:46.739698    4562 ssh_runner.go:146] rm: /preloaded.tar.lz4
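
The failed stat confirms the preload tarball is absent, so the ~360 MB archive is copied in, unpacked into /var with lz4-aware tar (preserving security.capability xattrs), and then deleted. A guest-side sketch of that check-extract-clean sequence (standalone, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // extractPreload unpacks the preload tarball into /var and removes it,
    // mirroring the guest-side commands in the log.
    func extractPreload(tarball string) error {
        // Existence check; like the log's `stat`, a non-zero exit means absent.
        if err := exec.Command("stat", tarball).Run(); err != nil {
            return fmt.Errorf("%s missing: copy it in first (minikube scp's it from the host cache)", tarball)
        }
        out, err := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar: %w\n%s", err, out)
        }
        return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }
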
	I0802 11:05:46.754935    4562 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:05:46.758341    4562 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0802 11:05:46.763443    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:46.846267    4562 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:05:48.032557    4562 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.186314542s)
	I0802 11:05:48.032657    4562 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:05:48.043821    4562 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:05:48.043831    4562 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:05:48.043836    4562 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
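
The preloaded images carry their pre-rename k8s.gcr.io tags, while v1.24.1 is resolved against registry.k8s.io, so every required ref is treated as missing even though the bits are already on disk. The set difference that drives LoadCachedImages, sketched:

    package main

    import "fmt"

    // missingImages returns required refs that the runtime does not report.
    func missingImages(required, have []string) []string {
        present := map[string]bool{}
        for _, img := range have {
            present[img] = true
        }
        var missing []string
        for _, img := range required {
            if !present[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        required := []string{"registry.k8s.io/kube-apiserver:v1.24.1", "registry.k8s.io/pause:3.7"}
        have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/pause:3.7"} // from `docker images`
        // Both required refs come back "missing" despite identical content,
        // which is exactly why the loop above walks the whole list.
        fmt.Println(missingImages(required, have))
    }
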
	I0802 11:05:48.048265    4562 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:05:48.050487    4562 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:05:48.052595    4562 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:05:48.052677    4562 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:05:48.053832    4562 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:05:48.054052    4562 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:05:48.055602    4562 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:05:48.055833    4562 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:05:48.056602    4562 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:05:48.057177    4562 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0802 11:05:48.058257    4562 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:05:48.058330    4562 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:05:48.059341    4562 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0802 11:05:48.059371    4562 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:05:48.060341    4562 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:05:48.061175    4562 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:05:48.444421    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:05:48.457866    4562 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0802 11:05:48.457895    4562 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:05:48.457947    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:05:48.470826    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0802 11:05:48.484455    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:05:48.485084    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0802 11:05:48.486267    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:05:48.490206    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:05:48.497192    4562 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0802 11:05:48.497211    4562 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:05:48.497268    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:05:48.501184    4562 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0802 11:05:48.501204    4562 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0802 11:05:48.501253    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0802 11:05:48.510235    4562 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0802 11:05:48.510254    4562 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:05:48.510312    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:05:48.513212    4562 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0802 11:05:48.513229    4562 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:05:48.513271    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:05:48.526552    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0802 11:05:48.530971    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0802 11:05:48.536171    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0802 11:05:48.536182    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0802 11:05:48.536242    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0802 11:05:48.536279    4562 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0802 11:05:48.549370    4562 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0802 11:05:48.549388    4562 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:05:48.549437    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0802 11:05:48.549439    4562 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0802 11:05:48.549449    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0802 11:05:48.557476    4562 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0802 11:05:48.557500    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0802 11:05:48.562112    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0802 11:05:48.577027    4562 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0802 11:05:48.577168    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:05:48.602673    4562 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0802 11:05:48.602698    4562 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:05:48.602754    4562 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0802 11:05:48.602755    4562 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:05:48.614066    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0802 11:05:48.614187    4562 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:05:48.615663    4562 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0802 11:05:48.615674    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0802 11:05:48.658340    4562 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:05:48.658354    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0802 11:05:48.695990    4562 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0802 11:05:51.585986    4562 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0802 11:05:51.586627    4562 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:05:51.625252    4562 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0802 11:05:51.625294    4562 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:05:51.625402    4562 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:05:51.651881    4562 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0802 11:05:51.652065    4562 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:05:51.654249    4562 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0802 11:05:51.654268    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0802 11:05:51.687002    4562 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:05:51.687018    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0802 11:05:51.921714    4562 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0802 11:05:51.921755    4562 cache_images.go:92] duration metric: took 3.878042666s to LoadCachedImages
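
Each missing image goes through the same pipeline seen above: `docker image inspect` to confirm absence, `docker rmi` of the stale tag, an scp of the cached tarball into /var/lib/minikube/images, then a piped `docker load`. A guest-side sketch of the load step:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // loadImage replays the "sudo cat <tar> | docker load" pipeline from the log.
    func loadImage(path string) error {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", path)).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load %s: %w\n%s", path, err, out)
        }
        return nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/images/pause_3.7",
            "/var/lib/minikube/images/coredns_v1.8.6",
            "/var/lib/minikube/images/storage-provisioner_v5",
        } {
            if err := loadImage(p); err != nil {
                log.Fatal(err)
            }
        }
    }
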
	W0802 11:05:51.921796    4562 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0802 11:05:51.921801    4562 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0802 11:05:51.921854    4562 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-894000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 11:05:51.921915    4562 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0802 11:05:51.935901    4562 cni.go:84] Creating CNI manager for ""
	I0802 11:05:51.935913    4562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:05:51.935918    4562 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 11:05:51.935926    4562 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-894000 NodeName:running-upgrade-894000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 11:05:51.935994    4562 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-894000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 11:05:51.936057    4562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0802 11:05:51.938886    4562 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 11:05:51.938911    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 11:05:51.942000    4562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0802 11:05:51.947280    4562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 11:05:51.952005    4562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
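
The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check that decodes the stream and lists each document's kind, sketched with gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.Decoder reads successive "---"-separated documents.
        dec := yaml.NewDecoder(f)
        var kinds []string
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            kinds = append(kinds, doc.Kind)
        }
        fmt.Println(strings.Join(kinds, ", "))
        // Expected: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration
    }
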
	I0802 11:05:51.957183    4562 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0802 11:05:51.958604    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:05:52.049589    4562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:05:52.054852    4562 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000 for IP: 10.0.2.15
	I0802 11:05:52.054860    4562 certs.go:194] generating shared ca certs ...
	I0802 11:05:52.054868    4562 certs.go:226] acquiring lock for ca certs: {Name:mkac8babaf2bcf8bb25aa8e1753c51c03330d7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:05:52.055034    4562 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key
	I0802 11:05:52.055079    4562 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key
	I0802 11:05:52.055087    4562 certs.go:256] generating profile certs ...
	I0802 11:05:52.055147    4562 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.key
	I0802 11:05:52.055164    4562 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key.b16287a9
	I0802 11:05:52.055175    4562 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt.b16287a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0802 11:05:52.207392    4562 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt.b16287a9 ...
	I0802 11:05:52.207398    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt.b16287a9: {Name:mkb7dd0d9fce45102fca0b0dd3943455a00591d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:05:52.207781    4562 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key.b16287a9 ...
	I0802 11:05:52.207792    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key.b16287a9: {Name:mk47a2c855c4350057a94aeb47cde56b004dfa07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:05:52.207978    4562 certs.go:381] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt.b16287a9 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt
	I0802 11:05:52.208133    4562 certs.go:385] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key.b16287a9 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key
	I0802 11:05:52.208285    4562 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/proxy-client.key
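
crypto.go:68 issues the apiserver serving cert against the shared minikubeCA with SANs for the in-cluster service IP (10.96.0.1), loopback, 10.0.0.1, and the node IP 10.0.2.15. A condensed, self-contained version of that signing step with crypto/x509; it regenerates a throwaway CA so it runs standalone, whereas the real flow reuses the cached ca.key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA (the real one is reused
        // from the host's .minikube directory, as the "skipping valid" lines show).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        ca, _ := x509.ParseCertificate(caDER)

        // Apiserver cert with the SAN set from the log line above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
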
	I0802 11:05:52.208420    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem (1338 bytes)
	W0802 11:05:52.208447    4562 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747_empty.pem, impossibly tiny 0 bytes
	I0802 11:05:52.208452    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 11:05:52.208482    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem (1078 bytes)
	I0802 11:05:52.208504    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem (1123 bytes)
	I0802 11:05:52.208522    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem (1675 bytes)
	I0802 11:05:52.208567    4562 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:05:52.208880    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 11:05:52.215962    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0802 11:05:52.223060    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 11:05:52.229755    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 11:05:52.237396    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 11:05:52.244258    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 11:05:52.251986    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 11:05:52.258764    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 11:05:52.265971    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 11:05:52.273050    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem --> /usr/share/ca-certificates/1747.pem (1338 bytes)
	I0802 11:05:52.279880    4562 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /usr/share/ca-certificates/17472.pem (1708 bytes)
	I0802 11:05:52.286420    4562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 11:05:52.291355    4562 ssh_runner.go:195] Run: openssl version
	I0802 11:05:52.293142    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 11:05:52.296119    4562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:05:52.297499    4562 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:26 /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:05:52.297519    4562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:05:52.299246    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 11:05:52.302118    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1747.pem && ln -fs /usr/share/ca-certificates/1747.pem /etc/ssl/certs/1747.pem"
	I0802 11:05:52.305488    4562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1747.pem
	I0802 11:05:52.306814    4562 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:35 /usr/share/ca-certificates/1747.pem
	I0802 11:05:52.306836    4562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1747.pem
	I0802 11:05:52.308595    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1747.pem /etc/ssl/certs/51391683.0"
	I0802 11:05:52.311182    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17472.pem && ln -fs /usr/share/ca-certificates/17472.pem /etc/ssl/certs/17472.pem"
	I0802 11:05:52.314425    4562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17472.pem
	I0802 11:05:52.315693    4562 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:35 /usr/share/ca-certificates/17472.pem
	I0802 11:05:52.315718    4562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17472.pem
	I0802 11:05:52.317469    4562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17472.pem /etc/ssl/certs/3ec20f2e.0"
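
The b5213941.0, 51391683.0, and 3ec20f2e.0 link names are OpenSSL subject-hash filenames: each CA is symlinked into /etc/ssl/certs under the hash `openssl x509 -hash` reports, which is how OpenSSL-based clients on the guest locate trusted roots. A sketch of that install step (requires root for /etc/ssl/certs):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a PEM cert into certsDir under its OpenSSL subject
    // hash, following the "<hash>.0" convention seen in the log.
    func installCA(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // replace any stale link, like the `ln -fs` above
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("installed")
    }
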
	I0802 11:05:52.320093    4562 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 11:05:52.321581    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 11:05:52.323445    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 11:05:52.325203    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 11:05:52.327061    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 11:05:52.329069    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 11:05:52.331044    4562 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
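
Each `openssl x509 ... -checkend 86400` probe asks whether the certificate expires within the next 24 hours; any that would are regenerated. The equivalent check in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // matching openssl's -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(b)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
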
	I0802 11:05:52.332830    4562 kubeadm.go:392] StartCluster: {Name:running-upgrade-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:05:52.332894    4562 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:05:52.343013    4562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 11:05:52.346546    4562 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 11:05:52.346551    4562 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 11:05:52.346575    4562 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 11:05:52.349809    4562 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:05:52.350065    4562 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-894000" does not appear in /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:05:52.350116    4562 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-1243/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-894000" cluster setting kubeconfig missing "running-upgrade-894000" context setting]
	I0802 11:05:52.350250    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:05:52.350957    4562 kapi.go:59] client config for running-upgrade-894000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103eb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 11:05:52.351294    4562 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 11:05:52.354048    4562 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-894000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0802 11:05:52.354053    4562 kubeadm.go:1160] stopping kube-system containers ...
	I0802 11:05:52.354091    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:05:52.365053    4562 docker.go:483] Stopping containers: [78baef9bff76 32ac0abca4c8 27cb24e2108d fc786518af8a 68ac2873ee50 0db77f00880e 36fefc7bbc3f cf2e4f3ef5d9 08299215c7cc 08598abe7627 196bf2f95add e9b01549a648 6706f58a5d00 c936e4be50e9 4b3b72089323 85b3815c0eb9 9c912a5f7dbb aa745064275d af4333a6acb3 c3e452761fdd]
	I0802 11:05:52.365136    4562 ssh_runner.go:195] Run: docker stop 78baef9bff76 32ac0abca4c8 27cb24e2108d fc786518af8a 68ac2873ee50 0db77f00880e 36fefc7bbc3f cf2e4f3ef5d9 08299215c7cc 08598abe7627 196bf2f95add e9b01549a648 6706f58a5d00 c936e4be50e9 4b3b72089323 85b3815c0eb9 9c912a5f7dbb aa745064275d af4333a6acb3 c3e452761fdd
	I0802 11:05:52.913303    4562 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 11:05:53.004650    4562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:05:53.007982    4562 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug  2 18:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  2 18:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  2 18:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug  2 18:05 /etc/kubernetes/scheduler.conf
	
	I0802 11:05:53.008023    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf
	I0802 11:05:53.010878    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:05:53.010909    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:05:53.014944    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf
	I0802 11:05:53.020402    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:05:53.020434    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:05:53.025505    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf
	I0802 11:05:53.029316    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:05:53.029348    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:05:53.033635    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf
	I0802 11:05:53.038753    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:05:53.038784    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 11:05:53.043860    4562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:05:53.049049    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:05:53.108202    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:05:53.731215    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:05:53.930227    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:05:53.958316    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:05:53.980456    4562 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:05:53.980532    4562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:05:54.482830    4562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:05:54.982023    4562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:05:54.986157    4562 api_server.go:72] duration metric: took 1.005738542s to wait for apiserver process to appear ...
	I0802 11:05:54.986165    4562 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:05:54.986173    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:05:59.988096    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:05:59.988139    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:04.988400    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:04.988476    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:09.989149    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:09.989186    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:14.989799    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:14.989842    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:19.990726    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:19.990799    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:24.992369    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:24.992448    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:29.993173    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:29.993255    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:34.994893    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:34.994968    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:39.997382    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:39.997455    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:44.999829    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:44.999852    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:50.001936    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:06:50.002056    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:06:55.004291    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
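
api_server.go:253 probes https://10.0.2.15:8443/healthz with roughly a five-second per-request timeout; on this QEMU run the apiserver never answers, so every probe ends in Client.Timeout and the loop falls through to log gathering below. A sketch of such a poll loop (the skip-verify TLS config is a placeholder; the real client pins the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 or the deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between probes in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            // minikube interleaves log gathering here before the next probe.
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
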
	I0802 11:06:55.004453    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:06:55.014945    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:06:55.015035    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:06:55.026080    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:06:55.026144    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:06:55.036711    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:06:55.036771    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:06:55.050753    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:06:55.050816    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:06:55.060931    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:06:55.060997    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:06:55.071594    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:06:55.071649    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:06:55.081600    4562 logs.go:276] 0 containers: []
	W0802 11:06:55.081609    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:06:55.081655    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:06:55.092002    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:06:55.092025    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:06:55.092031    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:06:55.103732    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:06:55.103749    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:06:55.116231    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:06:55.116246    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:06:55.127526    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:06:55.127540    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:06:55.138530    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:06:55.138542    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:06:55.155699    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:06:55.155709    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:06:55.166984    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:06:55.166996    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:06:55.203035    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:06:55.203043    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:06:55.206946    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:06:55.206951    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:06:55.220773    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:06:55.220786    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:06:55.234832    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:06:55.234846    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:06:55.303616    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:06:55.303628    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:06:55.316891    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:06:55.316902    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:06:55.328316    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:06:55.328327    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:06:55.340039    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:06:55.340047    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:06:55.351425    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:06:55.351435    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:06:55.362720    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:06:55.362746    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
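
When healthz keeps failing, logs.go enumerates containers per control-plane component via `docker ps -a --filter=name=k8s_<component>` and tails the last 400 lines of each, alongside the kubelet and Docker journals and dmesg. One gathering pass, sketched:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists container IDs whose names match the k8s_<component> pattern.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            ids, err := containersFor(comp)
            if err != nil {
                continue
            }
            for _, id := range ids {
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", comp, id, out)
            }
        }
    }
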
	I0802 11:06:57.889150    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:02.891182    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:02.891708    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:02.931594    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:02.931737    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:02.952310    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:02.952438    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:02.968035    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:02.968121    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:02.985165    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:02.985241    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:02.997045    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:02.997117    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:03.011262    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:03.011333    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:03.021740    4562 logs.go:276] 0 containers: []
	W0802 11:07:03.021750    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:03.021807    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:03.032683    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:03.032704    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:03.032710    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:03.046338    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:03.046349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:03.057870    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:03.057881    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:03.068802    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:03.068811    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:03.080613    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:03.080625    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:03.091951    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:03.091962    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:03.130864    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:03.130873    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:03.168196    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:03.168209    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:03.179989    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:03.180001    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:03.192382    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:03.192394    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:03.197038    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:03.197045    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:03.211103    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:03.211116    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:03.223195    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:03.223210    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:03.240291    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:03.240302    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:03.266994    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:03.267000    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:03.287394    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:03.287403    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:03.308510    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:03.308524    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
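	With the IDs in hand, the gatherer pulls the last 400 lines from each container and from the kubelet/docker systemd units, which is what every "Gathering logs for ... / Run: ..." pair above records. A minimal local sketch of that step, assuming docker and journalctl on PATH (minikube actually executes these commands inside the VM over SSH, and the container ID below is just one taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name string, args ...string) {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Println(name, "error:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// last 400 lines per container, mirroring `docker logs --tail 400 <id>`
		gather("etcd [78baef9bff76]", "docker", "logs", "--tail", "400", "78baef9bff76")
		// unit logs, mirroring `journalctl -u kubelet -n 400` (may need sudo)
		gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	}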
	I0802 11:07:05.821856    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:10.824251    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:10.824649    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:10.865057    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:10.865193    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:10.886650    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:10.886767    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:10.901484    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:10.901549    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:10.913965    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:10.914036    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:10.924887    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:10.924953    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:10.935495    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:10.935554    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:10.945486    4562 logs.go:276] 0 containers: []
	W0802 11:07:10.945500    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:10.945552    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:10.956039    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:10.956060    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:10.956065    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:10.970077    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:10.970089    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:10.982307    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:10.982319    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:10.994046    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:10.994059    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:11.005237    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:11.005248    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:11.017122    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:11.017137    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:11.021242    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:11.021250    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:11.034882    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:11.034893    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:11.046102    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:11.046113    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:11.091497    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:11.091509    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:11.105155    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:11.105166    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:11.117035    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:11.117046    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:11.134956    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:11.134968    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:11.173945    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:11.173951    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:11.188440    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:11.188449    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:11.200123    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:11.200133    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:11.212045    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:11.212054    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:13.740448    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:18.743115    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:18.743590    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:18.781811    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:18.781957    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:18.806792    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:18.806890    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:18.821519    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:18.821595    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:18.834159    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:18.834229    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:18.844463    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:18.844538    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:18.855417    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:18.855489    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:18.865853    4562 logs.go:276] 0 containers: []
	W0802 11:07:18.865865    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:18.865925    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:18.876868    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:18.876885    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:18.876891    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:18.888682    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:18.888696    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:18.900664    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:18.900677    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:18.926713    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:18.926724    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:18.931419    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:18.931425    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:18.976838    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:18.976848    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:18.990103    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:18.990113    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:19.005764    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:19.005774    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:19.017054    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:19.017066    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:19.028481    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:19.028491    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:19.046068    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:19.046076    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:19.057984    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:19.057995    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:19.069628    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:19.069639    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:19.081799    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:19.081811    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:19.120212    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:19.120219    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:19.134786    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:19.134798    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:19.147038    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:19.147051    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:21.662443    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:26.664942    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:26.665234    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:26.691823    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:26.691946    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:26.709575    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:26.709673    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:26.722604    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:26.722680    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:26.734506    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:26.734579    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:26.744968    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:26.745032    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:26.755623    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:26.755686    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:26.765858    4562 logs.go:276] 0 containers: []
	W0802 11:07:26.765869    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:26.765931    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:26.776643    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:26.776659    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:26.776665    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:26.781014    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:26.781021    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:26.792600    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:26.792613    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:26.831754    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:26.831763    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:26.846396    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:26.846408    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:26.858481    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:26.858491    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:26.871488    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:26.871500    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:26.885481    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:26.885493    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:26.897532    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:26.897545    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:26.908859    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:26.908869    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:26.920255    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:26.920264    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:26.944710    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:26.944719    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:26.979391    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:26.979402    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:26.993557    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:26.993567    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:27.004558    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:27.004569    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:27.016084    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:27.016097    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:27.033421    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:27.033434    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:29.547297    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:34.548042    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:34.548458    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:34.589355    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:34.589488    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:34.611674    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:34.611805    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:34.628143    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:34.628209    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:34.642990    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:34.643065    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:34.655546    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:34.655606    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:34.666862    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:34.666919    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:34.680524    4562 logs.go:276] 0 containers: []
	W0802 11:07:34.680537    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:34.680583    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:34.690881    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:34.690900    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:34.690905    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:34.709672    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:34.709687    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:34.727810    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:34.727818    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:34.741407    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:34.741420    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:34.753894    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:34.753904    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:34.765812    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:34.765823    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:34.777344    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:34.777355    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:34.792286    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:34.792297    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:34.804056    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:34.804069    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:34.808357    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:34.808365    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:34.844479    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:34.844492    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:34.857277    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:34.857288    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:34.895917    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:34.895924    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:34.907836    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:34.907848    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:34.932055    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:34.932063    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:34.945364    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:34.945378    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:34.959295    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:34.959306    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:37.472621    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:42.474020    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:42.474427    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:42.506983    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:42.507111    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:42.530441    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:42.530534    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:42.543455    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:42.543527    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:42.555469    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:42.555538    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:42.566110    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:42.566170    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:42.576298    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:42.576367    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:42.586751    4562 logs.go:276] 0 containers: []
	W0802 11:07:42.586766    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:42.586826    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:42.597061    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:42.597076    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:42.597080    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:42.608673    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:42.608685    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:42.626199    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:42.626213    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:42.642041    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:42.642054    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:42.654661    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:42.654671    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:42.680375    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:42.680385    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:42.714090    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:42.714104    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:42.729236    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:42.729245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:42.741228    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:42.741239    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:42.755229    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:42.755242    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:42.766919    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:42.766932    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:42.778853    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:42.778864    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:42.816300    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:42.816309    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:42.820546    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:42.820558    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:42.832168    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:42.832178    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:42.843690    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:42.843701    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:42.854972    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:42.854983    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:45.368431    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:50.370828    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:50.371258    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:50.408194    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:50.408327    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:50.428131    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:50.428234    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:50.442554    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:50.442626    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:50.454703    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:50.454769    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:50.466369    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:50.466432    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:50.482595    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:50.482670    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:50.496620    4562 logs.go:276] 0 containers: []
	W0802 11:07:50.496632    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:50.496693    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:50.507515    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:50.507531    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:50.507536    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:50.533758    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:50.533773    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:50.572238    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:50.572246    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:50.584387    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:50.584398    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:50.597313    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:50.597324    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:50.608538    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:50.608550    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:50.620032    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:50.620041    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:50.632096    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:50.632108    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:50.647008    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:50.647018    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:50.651262    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:50.651269    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:07:50.662633    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:50.662645    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:50.697280    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:50.697291    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:50.711222    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:50.711231    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:50.722418    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:50.722428    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:50.748524    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:50.748535    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:50.759632    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:50.759641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:50.771067    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:50.771078    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:53.286756    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:07:58.289221    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:07:58.289414    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:07:58.301239    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:07:58.301319    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:07:58.312523    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:07:58.312600    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:07:58.323183    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:07:58.323266    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:07:58.335853    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:07:58.335927    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:07:58.346528    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:07:58.346596    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:07:58.358172    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:07:58.358244    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:07:58.369280    4562 logs.go:276] 0 containers: []
	W0802 11:07:58.369290    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:07:58.369349    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:07:58.379701    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:07:58.379719    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:07:58.379724    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:07:58.405280    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:07:58.405290    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:07:58.426045    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:07:58.426057    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:07:58.440489    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:07:58.440499    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:07:58.452210    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:07:58.452222    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:07:58.469964    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:07:58.469978    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:07:58.481610    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:07:58.481620    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:07:58.493405    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:07:58.493416    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:07:58.505141    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:07:58.505151    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:07:58.517577    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:07:58.517588    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:07:58.557847    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:07:58.557855    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:07:58.562592    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:07:58.562598    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:07:58.617629    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:07:58.617641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:07:58.630683    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:07:58.630694    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:07:58.643698    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:07:58.643708    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:07:58.657336    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:07:58.657347    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:07:58.669085    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:07:58.669098    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:01.183284    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:06.185423    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:06.185892    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:06.225665    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:06.225807    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:06.248543    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:06.248654    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:06.263707    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:06.263784    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:06.276804    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:06.276885    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:06.287696    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:06.287765    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:06.298355    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:06.298425    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:06.310712    4562 logs.go:276] 0 containers: []
	W0802 11:08:06.310722    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:06.310779    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:06.320829    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:06.320849    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:06.320855    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:06.356269    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:06.356282    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:06.371832    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:06.371843    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:06.396520    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:06.396529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:06.409152    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:06.409164    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:06.429364    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:06.429375    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:06.445865    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:06.445879    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:06.483281    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:06.483298    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:06.487716    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:06.487724    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:06.501643    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:06.501654    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:06.514005    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:06.514018    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:06.527601    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:06.527614    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:06.538972    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:06.538987    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:06.552175    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:06.552185    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:06.565309    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:06.565323    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:06.581357    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:06.581370    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:06.592481    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:06.592493    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:09.104630    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:14.106678    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:14.106908    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:14.128448    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:14.128543    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:14.143832    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:14.143911    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:14.155912    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:14.155986    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:14.166473    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:14.166545    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:14.176983    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:14.177053    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:14.187713    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:14.187776    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:14.197653    4562 logs.go:276] 0 containers: []
	W0802 11:08:14.197665    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:14.197725    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:14.208598    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:14.208616    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:14.208622    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:14.220256    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:14.220268    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:14.245542    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:14.245550    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:14.257315    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:14.257324    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:14.294424    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:14.294432    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:14.298498    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:14.298503    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:14.309773    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:14.309787    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:14.321793    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:14.321808    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:14.335952    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:14.335966    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:14.347250    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:14.347262    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:14.360097    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:14.360116    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:14.379238    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:14.379251    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:14.391857    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:14.391871    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:14.403572    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:14.403581    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:14.423930    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:14.423940    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:14.439393    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:14.439407    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:14.478213    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:14.478222    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:16.991642    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:21.993747    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:21.993858    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:22.005682    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:22.005764    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:22.017588    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:22.017667    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:22.031914    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:22.031985    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:22.043655    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:22.043726    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:22.055173    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:22.055248    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:22.071764    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:22.071841    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:22.082871    4562 logs.go:276] 0 containers: []
	W0802 11:08:22.082883    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:22.082945    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:22.094452    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:22.094470    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:22.094476    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:22.109042    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:22.109053    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:22.120970    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:22.120982    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:22.133805    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:22.133817    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:22.158645    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:22.158666    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:22.173171    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:22.173183    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:22.185901    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:22.185913    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:22.203920    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:22.203931    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:22.222618    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:22.222634    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:22.235565    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:22.235581    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:22.247265    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:22.247276    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:22.259242    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:22.259254    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:22.275654    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:22.275666    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:22.280334    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:22.280342    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:22.316088    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:22.316099    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:22.329575    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:22.329585    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:22.342047    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:22.342060    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
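(Annotation: the lines above and below follow one fixed pattern: a probe of https://10.0.2.15:8443/healthz that gives up after five seconds with "context deadline exceeded", meaning no response headers arrived within the client timeout, then a full diagnostic sweep, then another probe roughly 2.5 seconds later. A minimal Go sketch of such a probe loop, illustrative only and not minikube's actual api_server.go; the URL and the 5s/2.5s intervals are read off the timestamps in this log:

// Sketch only, not minikube's implementation. InsecureSkipVerify is assumed
// because the apiserver serves a self-signed certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// ~5s matches the gap between each "Checking" and "stopped" line.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Surfaces as: Get ".../healthz": context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
			return // healthy; in this log that point is never reached
		}
		// The real flow gathers diagnostics here, then retries ~2.5s later.
		time.Sleep(2500 * time.Millisecond)
	}
}

Every probe in this section fails the same way, which is why the sweep keeps repeating.)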
	I0802 11:08:24.884982    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:29.885963    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:29.886158    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:29.901221    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:29.901290    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:29.913563    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:29.913638    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:29.924088    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:29.924156    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:29.938899    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:29.938967    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:29.951411    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:29.951475    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:29.961721    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:29.961790    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:29.974336    4562 logs.go:276] 0 containers: []
	W0802 11:08:29.974346    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:29.974399    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:29.984937    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:29.984954    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:29.984959    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:29.999003    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:29.999015    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:30.016248    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:30.016260    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:30.029787    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:30.029804    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:30.041308    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:30.041320    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:30.055670    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:30.055679    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:30.066910    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:30.066924    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:30.105593    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:30.105605    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:30.116854    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:30.116864    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:30.134419    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:30.134431    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:30.145998    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:30.146009    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:30.181979    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:30.181989    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:30.205214    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:30.205222    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:30.209521    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:30.209529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:30.223856    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:30.223866    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:30.236292    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:30.236305    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:30.247910    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:30.247922    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
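(Annotation: each sweep starts by enumerating containers per control-plane component, as the docker ps runs above show. kubeadm names pod containers with a k8s_ prefix, so filtering on name=k8s_<component> and formatting with the Go template {{.ID}} yields bare IDs; -a includes exited containers. Two IDs per component here mean a current container plus an earlier exited one, and zero IDs produce the No container was found matching "kindnet" warning. A hedged sketch of that enumeration, using a hypothetical helper rather than the real logs.go:

// Sketch only; assumes docker is on PATH on the machine running it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs (running or exited) whose name
// matches the kubeadm-style prefix k8s_<component>, e.g. k8s_kube-apiserver.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Zero IDs is what triggers the "No container was found" warning above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
)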
	I0802 11:08:32.761683    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:37.764305    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:37.764763    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:37.811241    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:37.811383    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:37.833512    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:37.833608    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:37.848199    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:37.848268    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:37.861051    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:37.861125    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:37.871606    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:37.871674    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:37.882254    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:37.882325    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:37.892497    4562 logs.go:276] 0 containers: []
	W0802 11:08:37.892509    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:37.892570    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:37.903305    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:37.903322    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:37.903328    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:37.944363    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:37.944382    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:37.980365    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:37.980376    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:37.993643    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:37.993653    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:38.007604    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:38.007616    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:38.021987    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:38.021998    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:38.035158    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:38.035172    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:38.046667    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:38.046678    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:38.058478    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:38.058491    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:38.063314    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:38.063323    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:38.079257    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:38.079267    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:38.090328    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:38.090340    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:38.114583    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:38.114589    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:38.126399    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:38.126411    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:38.139474    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:38.139485    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:38.156696    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:38.156706    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:38.168306    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:38.168322    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
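(Annotation: the per-component sections are then collected with docker logs --tail 400 over each ID from the enumeration. The 400-line cap keeps a long retry loop from ballooning the report, and tailing both the current and the previous container preserves crash-loop output; the order of the sections varies from sweep to sweep, but the set is identical. A small sketch of that step, reusing the two kube-controller-manager IDs from this log as example arguments:

// Sketch only; mirrors the "docker logs --tail 400 <id>" lines above.
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns at most the newest `lines` lines of a container's
// output. CombinedOutput captures both the stdout and stderr streams.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Two IDs per component are typical here: the live container and the
	// previous (exited) attempt, so both crash loops and current state show.
	for _, id := range []string{"d4ad7d25e56f", "e9b01549a648"} {
		logs, err := tailContainerLogs(id, 400)
		if err != nil {
			fmt.Println(id, err)
			continue
		}
		fmt.Print(logs)
	}
}
)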
	I0802 11:08:40.681917    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:45.684459    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:45.684610    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:45.696261    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:45.696340    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:45.707656    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:45.707731    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:45.718312    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:45.718383    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:45.733218    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:45.733283    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:45.743974    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:45.744040    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:45.754699    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:45.754764    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:45.777714    4562 logs.go:276] 0 containers: []
	W0802 11:08:45.777727    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:45.777785    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:45.788402    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:45.788418    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:45.788424    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:45.823170    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:45.823182    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:45.835634    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:45.835644    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:45.847233    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:45.847245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:45.864139    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:45.864151    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:45.878208    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:45.878221    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:45.892211    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:45.892220    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:45.904273    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:45.904285    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:45.929010    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:45.929018    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:45.941050    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:45.941060    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:45.952377    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:45.952388    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:45.991009    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:45.991017    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:45.995105    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:45.995111    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:46.010328    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:46.010340    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:46.028963    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:46.028974    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:46.040834    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:46.040844    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:46.052224    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:46.052236    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
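(Annotation: the "container status" step above uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backtick substitution resolves crictl's full path when it is installed; when it is not, which(1) fails, the bare word crictl is substituted, that command then fails, and the outer || falls through to docker ps -a. A sketch that shells out the same one-liner, assuming passwordless sudo inside the guest:

// Sketch only; runs the same fallback one-liner seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks are literal shell command substitution inside this string.
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// Only reached if both crictl and docker failed.
		fmt.Println("container status unavailable:", err)
	}
	fmt.Print(string(out))
}
)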
	I0802 11:08:48.566252    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:53.568333    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:53.568515    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:53.580907    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:53.581010    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:53.591842    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:53.591922    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:53.602741    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:53.602812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:53.617887    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:53.617959    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:53.636147    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:53.636218    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:53.647335    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:53.647401    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:53.657870    4562 logs.go:276] 0 containers: []
	W0802 11:08:53.657880    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:53.657934    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:53.668509    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:53.668527    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:53.668533    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:53.672972    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:53.672979    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:53.687785    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:53.687796    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:53.699662    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:53.699675    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:53.712973    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:53.712988    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:53.727258    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:53.727273    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:53.739126    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:53.739137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:53.757539    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:53.757550    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:53.798802    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:53.798810    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:53.835015    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:53.835027    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:53.847247    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:53.847257    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:53.872001    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:53.872009    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:53.892152    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:53.892164    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:53.922216    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:53.922232    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:53.944882    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:53.944894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:53.958788    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:53.958800    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:53.971362    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:53.971374    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:56.485228    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:01.486686    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:01.486811    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:01.497410    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:01.497480    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:01.508075    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:01.508134    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:01.520228    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:01.520291    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:01.531129    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:01.531198    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:01.542533    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:01.542602    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:01.553534    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:01.553600    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:01.564439    4562 logs.go:276] 0 containers: []
	W0802 11:09:01.564450    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:01.564507    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:01.574893    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:01.574908    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:01.574913    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:01.586710    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:01.586721    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:01.626055    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:01.626072    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:01.644154    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:01.644166    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:01.657924    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:01.657938    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:01.672292    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:01.672306    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:01.684676    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:01.684689    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:01.698858    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:01.698871    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:01.716507    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:01.716524    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:01.729074    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:01.729085    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:01.733289    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:01.733295    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:01.767966    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:01.767977    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:01.779577    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:01.779591    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:01.791330    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:01.791340    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:01.803136    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:01.803152    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:01.828671    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:01.828681    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:01.844482    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:01.844496    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
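(Annotation: kubelet and Docker logs come from the systemd journal rather than from containers: journalctl -u kubelet -n 400 for the kubelet, and a single query spanning both the docker and cri-docker units for the runtime, while dmesg is likewise capped at 400 lines and filtered to warn-or-worse levels. A sketch of the journal queries; illustrative only, with the unit names taken from the commands above:

// Sketch only; assumes a systemd guest with passwordless sudo.
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs fetches the newest `lines` journal entries for one or more systemd
// units. journalctl accepts -u repeatedly, as the docker/cri-docker run shows.
func unitLogs(lines int, units ...string) (string, error) {
	args := []string{"journalctl", "-n", fmt.Sprint(lines)}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	for _, set := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
		logs, err := unitLogs(400, set...)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Print(logs)
	}
}
)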
	I0802 11:09:04.357081    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:09.358839    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:09.358952    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:09.369861    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:09.369943    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:09.380210    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:09.380281    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:09.390732    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:09.390819    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:09.401054    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:09.401123    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:09.411675    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:09.411745    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:09.422664    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:09.422730    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:09.432297    4562 logs.go:276] 0 containers: []
	W0802 11:09:09.432309    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:09.432365    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:09.443273    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:09.443291    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:09.443297    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:09.448031    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:09.448040    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:09.470487    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:09.470501    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:09.482512    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:09.482525    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:09.503238    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:09.503248    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:09.522486    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:09.522497    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:09.535278    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:09.535288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:09.546639    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:09.546650    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:09.582478    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:09.582489    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:09.606450    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:09.606461    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:09.618396    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:09.618410    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:09.630525    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:09.630541    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:09.643084    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:09.643098    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:09.654750    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:09.654762    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:09.668030    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:09.668041    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:09.681719    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:09.681757    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:09.723176    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:09.723189    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:12.239723    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:17.241832    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:17.242049    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:17.254305    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:17.254378    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:17.264901    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:17.264963    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:17.275901    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:17.275972    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:17.286530    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:17.286597    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:17.297010    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:17.297084    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:17.307796    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:17.307865    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:17.318308    4562 logs.go:276] 0 containers: []
	W0802 11:09:17.318321    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:17.318387    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:17.329186    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:17.329209    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:17.329214    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:17.340417    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:17.340430    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:17.352194    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:17.352204    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:17.363200    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:17.363211    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:17.375415    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:17.375426    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:17.387417    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:17.387427    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:17.425098    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:17.425113    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:17.447313    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:17.447321    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:17.451620    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:17.451626    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:17.465802    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:17.465815    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:17.477747    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:17.477757    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:17.489092    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:17.489106    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:17.527515    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:17.527529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:17.541374    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:17.541384    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:17.554579    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:17.554588    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:17.572534    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:17.572549    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:17.584821    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:17.584832    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:20.098589    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:25.100652    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:25.100872    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:25.118113    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:25.118204    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:25.132081    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:25.132162    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:25.147616    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:25.147686    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:25.157492    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:25.157557    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:25.167487    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:25.167555    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:25.177559    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:25.177625    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:25.187922    4562 logs.go:276] 0 containers: []
	W0802 11:09:25.187933    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:25.187995    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:25.198751    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:25.198767    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:25.198772    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:25.210416    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:25.210428    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:25.222227    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:25.222239    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:25.227079    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:25.227085    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:25.239128    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:25.239139    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:25.253014    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:25.253024    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:25.270998    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:25.271009    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:25.294997    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:25.295004    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:25.306081    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:25.306093    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:25.317649    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:25.317661    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:25.329621    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:25.329631    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:25.341394    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:25.341403    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:25.353599    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:25.353613    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:25.393160    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:25.393167    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:25.428710    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:25.428721    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:25.442409    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:25.442420    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:25.457420    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:25.457430    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:27.971708    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:32.972720    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:32.972973    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:33.000575    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:33.000696    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:33.019025    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:33.019103    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:33.031895    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:33.031965    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:33.044118    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:33.044204    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:33.054936    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:33.055037    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:33.065469    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:33.065559    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:33.076018    4562 logs.go:276] 0 containers: []
	W0802 11:09:33.076030    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:33.076083    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:33.086156    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:33.086174    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:33.086180    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:33.098126    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:33.098137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:33.112235    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:33.112245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:33.123833    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:33.123843    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:33.135339    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:33.135351    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:33.153334    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:33.153344    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:33.165644    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:33.165658    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:33.190111    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:33.190124    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:33.229510    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:33.229519    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:33.265090    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:33.265103    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:33.278338    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:33.278349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:33.292544    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:33.292554    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:33.311803    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:33.311816    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:33.323830    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:33.323845    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:33.328789    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:33.328797    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:33.343738    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:33.343750    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:33.356777    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:33.356787    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:35.870516    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:40.871234    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:40.871575    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:40.900489    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:40.900652    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:40.919209    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:40.919292    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:40.932602    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:40.932674    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:40.943938    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:40.944010    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:40.954303    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:40.954377    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:40.965132    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:40.965202    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:40.975009    4562 logs.go:276] 0 containers: []
	W0802 11:09:40.975020    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:40.975078    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:40.985804    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:40.985821    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:40.985827    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:40.997040    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:40.997053    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:41.020122    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:41.020133    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:41.056778    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:41.056787    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:41.061751    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:41.061760    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:41.097966    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:41.097978    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:41.110268    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:41.110283    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:41.127975    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:41.127985    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:41.148330    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:41.148344    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:41.162129    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:41.162140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:41.175695    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:41.175704    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:41.188344    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:41.188357    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:41.199456    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:41.199466    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:41.210858    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:41.210870    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:41.224780    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:41.224792    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:41.235884    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:41.235894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:41.248629    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:41.248640    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:43.761482    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:48.763612    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:48.763898    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:48.797773    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:48.797860    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:48.826154    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:48.826248    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:48.843506    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:48.843586    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:48.855626    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:48.855697    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:48.866384    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:48.866454    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:48.876877    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:48.876946    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:48.886994    4562 logs.go:276] 0 containers: []
	W0802 11:09:48.887006    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:48.887076    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:48.899154    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
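Editor's note: every round of log collection opens with the same discovery step — one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component. The `-a` matters: exited containers are found too, which is why apiserver, etcd, scheduler, controller-manager, and storage-provisioner each report two IDs here (one from before the restart, one current). A minimal Go sketch of that discovery step, assuming the Docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listK8sContainers mirrors the docker-ps filter used in the log: it
    // returns the IDs of all containers, running or exited, named k8s_<name>.
    func listK8sContainers(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listK8sContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // logs.go:276 format
        }
    }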
	I0802 11:09:48.899172    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:48.899179    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:48.916810    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:48.916821    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:48.928843    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:48.928852    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:48.968078    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:48.968088    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:48.979792    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:48.979805    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:48.993762    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:48.993774    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:49.008179    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:49.008190    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:49.027200    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:49.027212    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:49.038587    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:49.038602    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:49.061277    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:49.061288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:49.078988    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:49.078999    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:49.091014    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:49.091025    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:49.095280    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:49.095288    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:49.130382    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:49.130393    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:49.144206    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:49.144215    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:49.159527    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:49.159538    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:49.171010    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:49.171020    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:51.685553    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:56.688120    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:56.688228    4562 kubeadm.go:597] duration metric: took 4m4.350327625s to restartPrimaryControlPlane
	W0802 11:09:56.688277    4562 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 11:09:56.688298    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0802 11:09:57.638134    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 11:09:57.643543    4562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:09:57.646827    4562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:09:57.649484    4562 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:09:57.649491    4562 kubeadm.go:157] found existing configuration files:
	
	I0802 11:09:57.649514    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf
	I0802 11:09:57.652563    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:09:57.652590    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:09:57.655623    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf
	I0802 11:09:57.657906    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:09:57.657924    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:09:57.660965    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf
	I0802 11:09:57.663958    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:09:57.663980    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:09:57.666445    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf
	I0802 11:09:57.669164    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:09:57.669190    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
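Editor's note: the four grep/rm pairs above are one loop — for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:50312), otherwise delete it so the upcoming `kubeadm init` regenerates it. Because the earlier `kubeadm reset` already removed the files, every grep exits with status 2 and every `rm -f` is a no-op. A minimal sketch of that cleanup, with the endpoint as a parameter:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected API endpoint, mirroring the grep-then-rm sequence in the log.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // missing file or stale endpoint: rm -f semantics, ignore errors
                os.Remove(f)
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:50312")
    }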
	I0802 11:09:57.672370    4562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 11:09:57.689914    4562 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0802 11:09:57.690015    4562 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 11:09:57.741170    4562 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 11:09:57.741234    4562 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 11:09:57.741289    4562 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 11:09:57.789812    4562 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 11:09:57.792974    4562 out.go:204]   - Generating certificates and keys ...
	I0802 11:09:57.793009    4562 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 11:09:57.793046    4562 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 11:09:57.793083    4562 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 11:09:57.793110    4562 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 11:09:57.793151    4562 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 11:09:57.793176    4562 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 11:09:57.793218    4562 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 11:09:57.793278    4562 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 11:09:57.793316    4562 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 11:09:57.793367    4562 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 11:09:57.793412    4562 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 11:09:57.793443    4562 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 11:09:58.167783    4562 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 11:09:58.236612    4562 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 11:09:58.328958    4562 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 11:09:58.489125    4562 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 11:09:58.517125    4562 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 11:09:58.517480    4562 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 11:09:58.517510    4562 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 11:09:58.613124    4562 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 11:09:58.617043    4562 out.go:204]   - Booting up control plane ...
	I0802 11:09:58.617095    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 11:09:58.617144    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 11:09:58.617174    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 11:09:58.617236    4562 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 11:09:58.617320    4562 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 11:10:03.119588    4562 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502442 seconds
	I0802 11:10:03.119670    4562 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 11:10:03.124351    4562 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 11:10:03.635764    4562 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 11:10:03.635939    4562 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-894000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 11:10:04.143623    4562 kubeadm.go:310] [bootstrap-token] Using token: 76xl6n.zpcdcvfslw7pcvqc
	I0802 11:10:04.149705    4562 out.go:204]   - Configuring RBAC rules ...
	I0802 11:10:04.149758    4562 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 11:10:04.149805    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 11:10:04.153334    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 11:10:04.154412    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 11:10:04.156577    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 11:10:04.157478    4562 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 11:10:04.160542    4562 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 11:10:04.330292    4562 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 11:10:04.547567    4562 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 11:10:04.548050    4562 kubeadm.go:310] 
	I0802 11:10:04.548079    4562 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 11:10:04.548138    4562 kubeadm.go:310] 
	I0802 11:10:04.548172    4562 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 11:10:04.548175    4562 kubeadm.go:310] 
	I0802 11:10:04.548190    4562 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 11:10:04.548223    4562 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 11:10:04.548250    4562 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 11:10:04.548253    4562 kubeadm.go:310] 
	I0802 11:10:04.548334    4562 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 11:10:04.548338    4562 kubeadm.go:310] 
	I0802 11:10:04.548434    4562 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 11:10:04.548437    4562 kubeadm.go:310] 
	I0802 11:10:04.548461    4562 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 11:10:04.548538    4562 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 11:10:04.548614    4562 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 11:10:04.548620    4562 kubeadm.go:310] 
	I0802 11:10:04.548679    4562 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 11:10:04.548776    4562 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 11:10:04.548782    4562 kubeadm.go:310] 
	I0802 11:10:04.548836    4562 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 76xl6n.zpcdcvfslw7pcvqc \
	I0802 11:10:04.548896    4562 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d \
	I0802 11:10:04.548913    4562 kubeadm.go:310] 	--control-plane 
	I0802 11:10:04.548916    4562 kubeadm.go:310] 
	I0802 11:10:04.548984    4562 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 11:10:04.548991    4562 kubeadm.go:310] 
	I0802 11:10:04.549034    4562 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 76xl6n.zpcdcvfslw7pcvqc \
	I0802 11:10:04.549082    4562 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d 
	I0802 11:10:04.549128    4562 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
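Editor's note: the --discovery-token-ca-cert-hash in the join commands above is not arbitrary — it is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which is how a joining node pins the CA it discovers over the bootstrap token. A sketch of the computation, assuming the CA lives under the certificateDir /var/lib/minikube/certs reported earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: sha256
    // over the DER-encoded SubjectPublicKeyInfo of the CA's public key.
    func caCertHash(path string) (string, error) {
        pemBytes, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(h)
    }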
	I0802 11:10:04.549138    4562 cni.go:84] Creating CNI manager for ""
	I0802 11:10:04.549146    4562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:10:04.553306    4562 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 11:10:04.560330    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 11:10:04.563912    4562 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
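Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the built-in bridge CNI plugin selected at cni.go:158 for the qemu2 driver + docker runtime. The exact contents are not shown in this log; the sketch below (as a Go string constant, keeping one language throughout) is a plausible minimal bridge conflist of the same shape — the plugin names are standard CNI plugins, but every field value here, including the pod subnet, is an assumption rather than data from this run:

    package main

    import "fmt"

    // bridgeConflist: hypothetical minimal CNI chain of the kind minikube
    // writes to /etc/cni/net.d/1-k8s.conflist; field values are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }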
	I0802 11:10:04.568712    4562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 11:10:04.568763    4562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 11:10:04.568806    4562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-894000 minikube.k8s.io/updated_at=2024_08_02T11_10_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=running-upgrade-894000 minikube.k8s.io/primary=true
	I0802 11:10:04.609394    4562 ops.go:34] apiserver oom_adj: -16
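Editor's note: the ops.go:34 line above is the result of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command two entries earlier — an oom_adj of -16 means the kernel's OOM killer strongly prefers other victims, confirming the restarted apiserver kept its protected score. A sketch of the same read, assuming a pgrep-style lookup by process name:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj reads /proc/<pid>/oom_adj for kube-apiserver, as the
    // cat command in the log does.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj) // expect -16 per the log above
    }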
	I0802 11:10:04.609408    4562 kubeadm.go:1113] duration metric: took 40.688833ms to wait for elevateKubeSystemPrivileges
	I0802 11:10:04.609433    4562 kubeadm.go:394] duration metric: took 4m12.285540875s to StartCluster
	I0802 11:10:04.609443    4562 settings.go:142] acquiring lock: {Name:mke9d9a6b3c42219545f5aed5860e740f1b28aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:10:04.609539    4562 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:10:04.609925    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:10:04.610150    4562 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:10:04.610159    4562 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 11:10:04.610195    4562 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-894000"
	I0802 11:10:04.610205    4562 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-894000"
	I0802 11:10:04.610207    4562 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-894000"
	W0802 11:10:04.610208    4562 addons.go:243] addon storage-provisioner should already be in state true
	I0802 11:10:04.610217    4562 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-894000"
	I0802 11:10:04.610223    4562 host.go:66] Checking if "running-upgrade-894000" exists ...
	I0802 11:10:04.610247    4562 config.go:182] Loaded profile config "running-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:10:04.611143    4562 kapi.go:59] client config for running-upgrade-894000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103eb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 11:10:04.611265    4562 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-894000"
	W0802 11:10:04.611270    4562 addons.go:243] addon default-storageclass should already be in state true
	I0802 11:10:04.611277    4562 host.go:66] Checking if "running-upgrade-894000" exists ...
	I0802 11:10:04.613248    4562 out.go:177] * Verifying Kubernetes components...
	I0802 11:10:04.613528    4562 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 11:10:04.617476    4562 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 11:10:04.617484    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:10:04.621069    4562 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:10:04.624251    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:10:04.628303    4562 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:10:04.628309    4562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 11:10:04.628314    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:10:04.717485    4562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:10:04.722541    4562 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:10:04.722580    4562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:10:04.726458    4562 api_server.go:72] duration metric: took 116.301833ms to wait for apiserver process to appear ...
	I0802 11:10:04.726464    4562 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:10:04.726470    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:04.732921    4562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 11:10:04.756215    4562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:10:09.728469    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:09.728536    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:14.728737    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:14.728776    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:19.729072    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:19.729116    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:24.729529    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:24.729580    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:29.730083    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:29.730108    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:34.730743    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:34.730782    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0802 11:10:35.073471    4562 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0802 11:10:35.078993    4562 out.go:177] * Enabled addons: storage-provisioner
	I0802 11:10:35.086920    4562 addons.go:510] duration metric: took 30.47783375s for enable addons: enabled=[storage-provisioner]
	I0802 11:10:39.731719    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:39.731768    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:44.733132    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:44.733180    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:49.734878    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:49.734906    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:54.735646    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:54.735673    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:59.737669    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:59.737710    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:04.739884    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:04.740064    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:04.758912    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:04.758991    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:04.783595    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:04.783664    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:04.795045    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:04.795123    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:04.805774    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:04.805835    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:04.816458    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:04.816518    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:04.826947    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:04.827016    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:04.837958    4562 logs.go:276] 0 containers: []
	W0802 11:11:04.837971    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:04.838027    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:04.848493    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:04.848508    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:04.848514    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:04.860215    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:04.860229    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:04.877658    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:04.877669    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:04.902828    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:04.902839    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:04.917053    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:04.917064    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:04.930976    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:04.930990    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:04.942457    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:04.942468    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:04.957997    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:04.958007    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:04.969706    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:04.969716    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:04.981154    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:04.981164    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:05.015976    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:05.015987    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:05.020374    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:05.020383    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:05.062782    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:05.062793    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:07.579106    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:12.581225    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:12.581569    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:12.610201    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:12.610343    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:12.632362    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:12.632473    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:12.645373    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:12.645440    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:12.656812    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:12.656882    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:12.674617    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:12.674686    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:12.685410    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:12.685474    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:12.695589    4562 logs.go:276] 0 containers: []
	W0802 11:11:12.695601    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:12.695659    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:12.707231    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:12.707246    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:12.707251    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:12.718922    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:12.718933    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:12.754307    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:12.754317    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:12.759008    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:12.759014    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:12.776633    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:12.776642    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:12.787966    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:12.787978    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:12.803455    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:12.803466    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:12.826193    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:12.826203    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:12.863415    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:12.863431    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:12.880338    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:12.880351    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:12.892102    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:12.892112    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:12.903996    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:12.904008    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:12.915328    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:12.915336    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:15.441565    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:20.443745    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:20.443873    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:20.456356    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:20.456432    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:20.467607    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:20.467671    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:20.478048    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:20.478120    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:20.488881    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:20.488947    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:20.500903    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:20.500978    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:20.511302    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:20.511368    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:20.521987    4562 logs.go:276] 0 containers: []
	W0802 11:11:20.522003    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:20.522064    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:20.534761    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:20.534775    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:20.534782    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:20.545987    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:20.546000    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:20.563970    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:20.563980    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:20.577433    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:20.577446    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:20.589210    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:20.589224    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:20.614846    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:20.614854    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:20.629888    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:20.629899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:20.642141    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:20.642151    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:20.663276    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:20.663286    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:20.674671    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:20.674681    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:20.709257    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:20.709265    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:20.714123    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:20.714131    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:20.748997    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:20.749007    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:23.263316    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:28.265774    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:28.266004    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:28.287107    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:28.287205    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:28.304320    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:28.304399    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:28.316029    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:28.316102    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:28.326929    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:28.326996    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:28.337467    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:28.337544    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:28.348219    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:28.348295    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:28.359244    4562 logs.go:276] 0 containers: []
	W0802 11:11:28.359256    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:28.359318    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:28.370207    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:28.370222    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:28.370228    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:28.381542    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:28.381555    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:28.415402    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:28.415412    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:28.431365    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:28.431375    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:28.447172    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:28.447182    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:28.458663    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:28.458674    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:28.470083    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:28.470094    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:28.487001    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:28.487016    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:28.510184    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:28.510195    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:28.514821    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:28.514828    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:28.549658    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:28.549672    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:28.560893    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:28.560903    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:28.579018    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:28.579029    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:31.092584    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:36.094775    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:36.095086    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:36.117993    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:36.118131    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:36.134091    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:36.134165    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:36.151371    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:36.151435    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:36.162130    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:36.162198    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:36.173559    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:36.173630    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:36.184832    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:36.184900    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:36.195788    4562 logs.go:276] 0 containers: []
	W0802 11:11:36.195800    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:36.195860    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:36.211336    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:36.211351    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:36.211357    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:36.244097    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:36.244105    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:36.258678    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:36.258688    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:36.271017    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:36.271031    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:36.298442    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:36.298453    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:36.310935    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:36.310951    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:36.315368    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:36.315377    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:36.354168    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:36.354181    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:36.369264    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:36.369278    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:36.382125    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:36.382137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:36.401239    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:36.401250    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:36.417300    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:36.417313    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:36.431213    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:36.431227    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:38.957803    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:43.960172    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:43.960337    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:43.974239    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:43.974318    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:43.986606    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:43.986669    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:43.997917    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:43.997991    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:44.009286    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:44.009347    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:44.019767    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:44.019837    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:44.030691    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:44.030759    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:44.041563    4562 logs.go:276] 0 containers: []
	W0802 11:11:44.041574    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:44.041634    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:44.052328    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:44.052344    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:44.052349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:44.068169    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:44.068179    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:44.092413    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:44.092424    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:44.117428    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:44.117437    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:44.152246    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:44.152258    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:44.191897    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:44.191910    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:44.207127    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:44.207140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:44.219537    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:44.219550    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:44.231641    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:44.231653    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:44.255252    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:44.255262    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:44.268523    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:44.268532    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:44.273111    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:44.273118    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:44.289434    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:44.289444    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:46.802408    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:51.804607    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:51.804791    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:51.818272    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:51.818348    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:51.829774    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:51.829837    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:51.840811    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:51.840874    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:51.852122    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:51.852194    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:51.863366    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:51.863435    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:51.874414    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:51.874477    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:51.886787    4562 logs.go:276] 0 containers: []
	W0802 11:11:51.886800    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:51.886858    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:51.898371    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:51.898384    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:51.898390    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:51.933689    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:51.933700    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:51.949061    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:51.949074    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:51.964486    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:51.964499    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:51.976753    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:51.976765    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:51.994258    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:51.994270    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:52.006140    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:52.006152    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:52.029155    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:52.029163    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:52.033751    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:52.033757    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:52.048233    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:52.048245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:52.060447    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:52.060459    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:52.072570    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:52.072583    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:52.084545    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:52.084558    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:54.618694    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:59.620817    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:59.621021    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:59.640205    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:59.640279    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:59.653535    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:59.653612    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:59.665587    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:59.665649    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:59.677593    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:59.677666    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:59.688267    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:59.688333    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:59.699311    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:59.699372    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:59.710252    4562 logs.go:276] 0 containers: []
	W0802 11:11:59.710261    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:59.710322    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:59.721835    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:59.721855    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:59.721860    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:59.755147    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:59.755155    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:59.759353    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:59.759362    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:59.773803    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:59.773818    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:59.787133    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:59.787143    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:59.803159    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:59.803171    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:59.827861    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:59.827870    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:59.840101    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:59.840111    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:59.876589    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:59.876603    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:59.891942    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:59.891952    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:59.904700    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:59.904710    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:59.917412    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:59.917426    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:59.936170    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:59.936180    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:02.450120    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:07.452229    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:07.452409    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:07.469722    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:07.469807    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:07.484741    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:07.484811    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:07.496191    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:07.496264    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:07.507259    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:07.507329    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:07.518795    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:07.518866    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:07.529823    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:07.529888    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:07.540955    4562 logs.go:276] 0 containers: []
	W0802 11:12:07.540967    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:07.541029    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:07.552409    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:07.552428    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:07.552434    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:07.588341    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:07.588353    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:07.603216    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:07.603231    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:07.614856    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:07.614867    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:07.629230    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:07.629243    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:07.647246    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:07.647255    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:07.652364    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:07.652373    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:07.667778    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:07.667788    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:07.702553    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:07.702561    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:07.716765    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:07.716775    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:07.728174    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:07.728188    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:07.739858    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:07.739867    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:07.751988    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:07.751997    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:07.777295    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:07.777303    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:07.791357    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:07.791368    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:10.307891    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:15.309995    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:15.310183    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:15.328908    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:15.328995    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:15.343821    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:15.343902    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:15.355812    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:15.355886    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:15.366960    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:15.367032    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:15.377920    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:15.377993    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:15.389920    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:15.389994    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:15.400681    4562 logs.go:276] 0 containers: []
	W0802 11:12:15.400694    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:15.400755    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:15.411202    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:15.411221    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:15.411225    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:15.422685    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:15.422698    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:15.437214    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:15.437224    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:15.460486    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:15.460496    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:15.474933    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:15.474944    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:15.486423    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:15.486438    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:15.504215    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:15.504225    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:15.538797    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:15.538811    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:15.550481    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:15.550491    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:15.561937    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:15.561948    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:15.573158    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:15.573169    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:15.606772    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:15.606780    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:15.611342    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:15.611350    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:15.622888    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:15.622902    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:15.638432    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:15.638442    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:18.155276    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:23.157071    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:23.157296    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:23.182554    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:23.182658    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:23.197660    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:23.197740    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:23.212072    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:23.212145    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:23.223508    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:23.223582    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:23.234154    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:23.234215    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:23.244455    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:23.244517    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:23.254625    4562 logs.go:276] 0 containers: []
	W0802 11:12:23.254639    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:23.254704    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:23.264965    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:23.264982    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:23.264987    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:23.276699    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:23.276707    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:23.294960    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:23.294971    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:23.307451    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:23.307465    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:23.319183    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:23.319196    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:23.338437    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:23.338447    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:23.373890    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:23.373907    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:23.410177    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:23.410188    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:23.425401    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:23.425412    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:23.436530    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:23.436542    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:23.459736    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:23.459745    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:23.478004    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:23.478014    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:23.489355    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:23.489365    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:23.493736    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:23.493742    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:23.507440    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:23.507448    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:26.021402    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:31.023924    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:31.024145    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:31.052889    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:31.052988    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:31.068246    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:31.068315    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:31.081162    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:31.081242    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:31.094527    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:31.094598    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:31.104918    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:31.104988    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:31.115238    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:31.115299    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:31.125484    4562 logs.go:276] 0 containers: []
	W0802 11:12:31.125498    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:31.125558    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:31.135933    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:31.135949    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:31.135953    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:31.147840    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:31.147851    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:31.159717    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:31.159726    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:31.171319    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:31.171330    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:31.204505    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:31.204511    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:31.208891    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:31.208899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:31.223757    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:31.223770    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:31.235348    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:31.235361    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:31.272405    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:31.272418    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:31.286455    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:31.286466    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:31.298578    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:31.298588    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:31.310582    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:31.310592    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:31.328754    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:31.328766    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:31.354374    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:31.354385    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:31.370010    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:31.370023    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:33.886383    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:38.888501    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:38.888721    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:38.905417    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:38.905509    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:38.918479    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:38.918553    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:38.929537    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:38.929613    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:38.939889    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:38.939953    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:38.950027    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:38.950090    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:38.960516    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:38.960585    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:38.970926    4562 logs.go:276] 0 containers: []
	W0802 11:12:38.970937    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:38.970994    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:38.981316    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:38.981333    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:38.981339    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:39.016375    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:39.016387    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:39.027857    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:39.027868    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:39.040074    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:39.040085    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:39.057623    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:39.057635    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:39.069050    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:39.069062    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:39.073377    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:39.073386    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:39.084448    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:39.084460    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:39.099577    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:39.099587    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:39.114253    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:39.114267    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:39.140267    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:39.140275    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:39.174256    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:39.174263    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:39.188292    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:39.188300    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:39.203521    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:39.203529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:39.215782    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:39.215793    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:41.729607    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:46.731521    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:46.731772    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:46.758182    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:46.758303    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:46.776717    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:46.776812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:46.790603    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:46.790678    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:46.802702    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:46.802769    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:46.813161    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:46.813232    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:46.824623    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:46.824697    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:46.836415    4562 logs.go:276] 0 containers: []
	W0802 11:12:46.836426    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:46.836491    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:46.849794    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:46.849812    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:46.849818    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:46.861514    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:46.861530    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:46.878789    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:46.878799    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:46.914063    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:46.914075    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:46.981096    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:46.981106    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:46.995884    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:46.995895    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:47.015097    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:47.015109    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:47.027753    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:47.027764    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:47.032097    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:47.032105    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:47.044009    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:47.044019    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:47.055091    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:47.055102    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:47.067281    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:47.067293    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:47.086286    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:47.086296    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:47.100358    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:47.100367    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:47.112118    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:47.112129    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:49.639028    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:54.641166    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:54.641323    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:54.652233    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:54.652309    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:54.663067    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:54.663137    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:54.673476    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:54.673547    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:54.684079    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:54.684146    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:54.694607    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:54.694675    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:54.705375    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:54.705442    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:54.716428    4562 logs.go:276] 0 containers: []
	W0802 11:12:54.716443    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:54.716510    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:54.726265    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:54.726282    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:54.726288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:54.743858    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:54.743870    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:54.767289    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:54.767296    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:54.771650    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:54.771659    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:54.784130    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:54.784140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:54.796401    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:54.796414    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:54.808116    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:54.808132    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:54.819315    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:54.819324    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:54.831119    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:54.831130    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:54.845881    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:54.845890    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:54.883042    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:54.883054    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:54.897703    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:54.897717    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:54.909125    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:54.909136    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:54.943578    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:54.943585    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:54.957305    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:54.957316    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:57.470347    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:02.472508    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:02.472626    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:02.484751    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:02.484831    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:02.495353    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:02.495419    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:02.509629    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:02.509724    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:02.520064    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:02.520134    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:02.530847    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:02.530916    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:02.541419    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:02.541482    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:02.551786    4562 logs.go:276] 0 containers: []
	W0802 11:13:02.551799    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:02.551853    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:02.562020    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:02.562036    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:02.562041    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:02.567003    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:02.567020    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:02.578654    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:02.578668    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:02.590754    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:02.590766    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:02.627194    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:02.627207    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:02.682336    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:02.682349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:02.694420    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:02.694433    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:02.713700    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:02.713712    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:02.731391    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:02.731402    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:02.746321    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:02.746332    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:02.758952    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:02.758966    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:02.770696    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:02.770706    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:02.782935    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:02.782947    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:02.795492    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:02.795502    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:02.821563    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:02.821575    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:05.338000    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:10.340138    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:10.340324    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:10.355626    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:10.355709    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:10.366477    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:10.366556    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:10.377713    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:10.377795    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:10.388295    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:10.388365    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:10.398343    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:10.398413    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:10.408879    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:10.408945    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:10.421005    4562 logs.go:276] 0 containers: []
	W0802 11:13:10.421016    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:10.421080    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:10.431607    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:10.431625    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:10.431630    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:10.443548    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:10.443560    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:10.455042    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:10.455053    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:10.466485    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:10.466499    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:10.478288    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:10.478299    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:10.482985    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:10.482995    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:10.496380    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:10.496393    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:10.509451    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:10.509461    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:10.534733    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:10.534741    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:10.568939    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:10.568948    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:10.583046    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:10.583058    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:10.595397    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:10.595406    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:10.612782    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:10.612792    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:10.651809    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:10.651820    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:10.665056    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:10.665067    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:13.184737    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:18.185742    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:18.185849    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:18.197552    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:18.197627    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:18.209559    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:18.209637    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:18.227701    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:18.227774    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:18.243514    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:18.243586    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:18.254360    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:18.254432    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:18.266071    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:18.266140    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:18.276751    4562 logs.go:276] 0 containers: []
	W0802 11:13:18.276762    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:18.276830    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:18.290097    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:18.290114    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:18.290120    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:18.326153    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:18.326170    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:18.361743    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:18.361754    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:18.375936    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:18.375946    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:18.387334    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:18.387345    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:18.399696    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:18.399706    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:18.419195    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:18.419206    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:18.434984    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:18.434994    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:18.450136    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:18.450147    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:18.462313    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:18.462323    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:18.476639    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:18.476654    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:18.488770    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:18.488781    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:18.500961    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:18.500972    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:18.519649    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:18.519663    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:18.544431    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:18.544442    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
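	The cycle above repeats before every probe: list each component's container with a docker ps name filter, then tail its logs. Below is a minimal Go sketch of that gather pattern, assuming a docker CLI on PATH and minikube's k8s_<component> container naming; it is an illustration of the commands visible in the log, not minikube's actual logs.go code.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs mirrors the log's `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            for _, id := range ids {
	                // mirror the log's `docker logs --tail 400 <id>`
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
	            }
	        }
	    }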
	I0802 11:13:21.050785    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:26.053294    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:26.053742    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:26.094168    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:26.094307    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:26.115591    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:26.115685    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:26.130215    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:26.130297    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:26.142461    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:26.142537    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:26.153829    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:26.153899    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:26.164849    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:26.164920    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:26.175182    4562 logs.go:276] 0 containers: []
	W0802 11:13:26.175193    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:26.175250    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:26.185487    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:26.185505    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:26.185510    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:26.219081    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:26.219089    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:26.233341    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:26.233354    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:26.245461    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:26.245475    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:26.250205    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:26.250212    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:26.286222    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:26.286237    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:26.298333    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:26.298345    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:26.310479    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:26.310493    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:26.322789    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:26.322800    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:26.336536    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:26.336546    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:26.351629    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:26.351641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:26.367888    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:26.367899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:26.382643    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:26.382658    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:26.404693    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:26.404707    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:26.423010    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:26.423024    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:28.947996    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:33.950111    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:33.950210    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:33.961889    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:33.961962    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:33.973092    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:33.973162    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:33.985364    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:33.985439    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:33.996900    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:33.996977    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:34.007829    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:34.007901    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:34.023614    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:34.023685    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:34.034667    4562 logs.go:276] 0 containers: []
	W0802 11:13:34.034677    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:34.034735    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:34.045787    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:34.045806    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:34.045812    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:34.059404    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:34.059417    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:34.078224    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:34.078235    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:34.083394    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:34.083407    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:34.095429    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:34.095440    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:34.107070    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:34.107084    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:34.146352    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:34.146366    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:34.158591    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:34.158602    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:34.173887    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:34.173900    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:34.187628    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:34.187641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:34.199115    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:34.199128    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:34.215256    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:34.215270    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:34.226524    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:34.226537    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:34.249629    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:34.249638    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:34.283462    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:34.283472    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:36.799488    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:41.801526    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:41.801694    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:41.813776    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:41.813855    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:41.824653    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:41.824727    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:41.835500    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:41.835578    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:41.859784    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:41.859863    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:41.874735    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:41.874812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:41.885486    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:41.885557    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:41.896317    4562 logs.go:276] 0 containers: []
	W0802 11:13:41.896328    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:41.896390    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:41.906772    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:41.906791    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:41.906796    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:41.920842    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:41.920855    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:41.933263    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:41.933274    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:41.945949    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:41.945958    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:41.957783    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:41.957798    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:41.962121    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:41.962131    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:41.996600    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:41.996614    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:42.017327    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:42.017339    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:42.029368    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:42.029379    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:42.040955    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:42.040967    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:42.056886    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:42.056897    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:42.068975    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:42.068984    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:42.092433    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:42.092442    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:42.104251    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:42.104265    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:42.139863    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:42.139873    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:44.656551    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:49.658622    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:49.658746    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:49.670172    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:49.670256    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:49.681527    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:49.681597    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:49.692106    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:49.692206    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:49.702992    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:49.703059    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:49.713071    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:49.713142    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:49.724492    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:49.724565    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:49.734869    4562 logs.go:276] 0 containers: []
	W0802 11:13:49.734879    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:49.734942    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:49.747349    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:49.747366    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:49.747371    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:49.764818    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:49.764831    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:49.780379    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:49.780390    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:49.792073    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:49.792084    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:49.803619    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:49.803629    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:49.815724    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:49.815739    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:49.849777    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:49.849785    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:49.868859    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:49.868868    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:49.887779    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:49.887789    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:49.892598    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:49.892604    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:49.904883    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:49.904894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:49.916726    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:49.916737    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:49.940989    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:49.941003    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:49.959954    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:49.959965    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:50.001085    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:50.001094    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:52.514818    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:57.516955    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:57.517154    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:57.533835    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:57.533927    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:57.546283    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:57.546361    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:57.559988    4562 logs.go:276] 4 containers: [294ca712bac3 333afebe2486 2ef39923a680 40a7e5e7fb55]
	I0802 11:13:57.560059    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:57.570835    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:57.570908    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:57.581442    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:57.581526    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:57.592132    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:57.592199    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:57.602951    4562 logs.go:276] 0 containers: []
	W0802 11:13:57.602962    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:57.603021    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:57.612947    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:57.612963    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:57.612969    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:57.632499    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:57.632510    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:57.650519    4562 logs.go:123] Gathering logs for coredns [333afebe2486] ...
	I0802 11:13:57.650529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333afebe2486"
	I0802 11:13:57.662394    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:57.662405    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:57.674757    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:57.674767    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:57.686482    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:57.686494    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:57.722372    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:57.722382    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:57.726746    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:57.726753    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:57.741849    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:57.741860    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:57.756568    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:57.756582    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:57.772052    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:57.772065    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:57.795849    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:57.795869    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:57.829542    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:57.829551    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:57.843309    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:57.843319    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:57.855973    4562 logs.go:123] Gathering logs for coredns [294ca712bac3] ...
	I0802 11:13:57.855982    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 294ca712bac3"
	I0802 11:14:00.371451    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:05.373588    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:05.378299    4562 out.go:177] 
	W0802 11:14:05.381186    4562 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0802 11:14:05.381196    4562 out.go:239] * 
	W0802 11:14:05.381962    4562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:14:05.397105    4562 out.go:177] 

** /stderr **
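The repeated pattern in the stderr above is a fixed-budget poll: the apiserver's /healthz endpoint is probed, each probe times out after about five seconds, diagnostics are gathered, and once the 6m0s node-wait budget expires the start aborts with GUEST_START. The following is a minimal Go sketch of that wait loop, using the endpoint, per-probe timeout, and budget shown in the log; it is an assumption-laden illustration, not minikube's api_server.go implementation.

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, budget time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe timeout; matches the ~5s gaps in the log
            Transport: &http.Transport{
                // the apiserver certificate inside the VM is self-signed
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthy
                }
            }
            time.Sleep(2 * time.Second) // pause before the next probe
        }
        return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println("X", err)
        }
    }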
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-894000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-02 11:14:05.492689 -0700 PDT m=+2917.024103917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-894000 -n running-upgrade-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-894000 -n running-upgrade-894000: exit status 2 (15.73110725s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
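minikube encodes degraded host/cluster states in its exit code, which is why the harness treats exit status 2 alongside a "Running" host as possibly fine. A hedged Go sketch of how a caller might consume that combination follows; the helper name is made up, and only the command and flags are taken from the log above.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostState runs the same status command as the post-mortem above.
    // --format={{.Host}} prints just the host state, e.g. "Running".
    func hostState(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
        if err != nil {
            // a non-zero exit may only signal a degraded component,
            // so the captured output can still be usable
            return string(out), fmt.Errorf("status exited non-zero (may be ok): %w", err)
        }
        return string(out), nil
    }

    func main() {
        state, err := hostState("running-upgrade-894000")
        fmt.Printf("host=%q err=%v\n", state, err)
    }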
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-894000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-329000          | force-systemd-flag-329000 | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-500000              | force-systemd-env-500000  | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-500000           | force-systemd-env-500000  | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT | 02 Aug 24 11:04 PDT |
	| start   | -p docker-flags-256000                | docker-flags-256000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-329000             | force-systemd-flag-329000 | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-329000          | force-systemd-flag-329000 | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT | 02 Aug 24 11:04 PDT |
	| start   | -p cert-expiration-630000             | cert-expiration-630000    | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-256000 ssh               | docker-flags-256000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-256000 ssh               | docker-flags-256000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-256000                | docker-flags-256000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT | 02 Aug 24 11:04 PDT |
	| start   | -p cert-options-479000                | cert-options-479000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-479000 ssh               | cert-options-479000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-479000 -- sudo        | cert-options-479000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-479000                | cert-options-479000       | jenkins | v1.33.1 | 02 Aug 24 11:04 PDT | 02 Aug 24 11:04 PDT |
	| start   | -p running-upgrade-894000             | minikube                  | jenkins | v1.26.0 | 02 Aug 24 11:04 PDT | 02 Aug 24 11:05 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-894000             | running-upgrade-894000    | jenkins | v1.33.1 | 02 Aug 24 11:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-630000             | cert-expiration-630000    | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-630000             | cert-expiration-630000    | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT | 02 Aug 24 11:07 PDT |
	| start   | -p kubernetes-upgrade-226000          | kubernetes-upgrade-226000 | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-226000          | kubernetes-upgrade-226000 | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT | 02 Aug 24 11:07 PDT |
	| start   | -p kubernetes-upgrade-226000          | kubernetes-upgrade-226000 | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-226000          | kubernetes-upgrade-226000 | jenkins | v1.33.1 | 02 Aug 24 11:07 PDT | 02 Aug 24 11:07 PDT |
	| start   | -p stopped-upgrade-387000             | minikube                  | jenkins | v1.26.0 | 02 Aug 24 11:07 PDT | 02 Aug 24 11:08 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-387000 stop           | minikube                  | jenkins | v1.26.0 | 02 Aug 24 11:08 PDT | 02 Aug 24 11:08 PDT |
	| start   | -p stopped-upgrade-387000             | stopped-upgrade-387000    | jenkins | v1.33.1 | 02 Aug 24 11:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 11:08:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 11:08:39.863396    4699 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:08:39.863596    4699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:08:39.863600    4699 out.go:304] Setting ErrFile to fd 2...
	I0802 11:08:39.863603    4699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:08:39.863784    4699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:08:39.864957    4699 out.go:298] Setting JSON to false
	I0802 11:08:39.883943    4699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4083,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:08:39.884012    4699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:08:39.888960    4699 out.go:177] * [stopped-upgrade-387000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:08:39.896959    4699 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:08:39.897017    4699 notify.go:220] Checking for updates...
	I0802 11:08:39.904910    4699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:08:39.907978    4699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:08:39.909426    4699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:08:39.916942    4699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:08:39.920829    4699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:08:39.924193    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:08:39.926952    4699 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 11:08:39.929961    4699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:08:39.933918    4699 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:08:39.940892    4699 start.go:297] selected driver: qemu2
	I0802 11:08:39.940897    4699 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:08:39.940939    4699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:08:39.943705    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:08:39.943721    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
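	The CNI recommendation above follows from dockershim's built-in networking being removed around Kubernetes v1.24, so the docker runtime on a VM driver needs an explicit CNI. A hypothetical condensation of that decision in Go (not the real cni.go logic):

	    package main

	    import "fmt"

	    // chooseCNI condenses the decision logged above (cni.go:158):
	    // with the docker runtime on Kubernetes >= 1.24, a bridge CNI
	    // is recommended. Hypothetical helper for illustration.
	    func chooseCNI(runtime string, k8sMinor int) string {
	        if runtime == "docker" && k8sMinor >= 24 {
	            return "bridge"
	        }
	        return "" // otherwise keep whatever the runtime provides
	    }

	    func main() { fmt.Println(chooseCNI("docker", 24)) }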
	I0802 11:08:39.943775    4699 start.go:340] cluster config:
	{Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:08:39.943827    4699 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:08:39.950887    4699 out.go:177] * Starting "stopped-upgrade-387000" primary control-plane node in "stopped-upgrade-387000" cluster
	I0802 11:08:39.954895    4699 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:08:39.954910    4699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0802 11:08:39.954919    4699 cache.go:56] Caching tarball of preloaded images
	I0802 11:08:39.955012    4699 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:08:39.955017    4699 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0802 11:08:39.955085    4699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/config.json ...
	I0802 11:08:39.955405    4699 start.go:360] acquireMachinesLock for stopped-upgrade-387000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:08:39.955435    4699 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "stopped-upgrade-387000"
	I0802 11:08:39.955442    4699 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:08:39.955448    4699 fix.go:54] fixHost starting: 
	I0802 11:08:39.955556    4699 fix.go:112] recreateIfNeeded on stopped-upgrade-387000: state=Stopped err=<nil>
	W0802 11:08:39.955564    4699 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:08:39.963877    4699 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-387000" ...
	I0802 11:08:37.764305    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:37.764763    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:37.811241    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:37.811383    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:37.833512    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:37.833608    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:37.848199    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:37.848268    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:37.861051    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:37.861125    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:37.871606    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:37.871674    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:37.882254    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:37.882325    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:37.892497    4562 logs.go:276] 0 containers: []
	W0802 11:08:37.892509    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:37.892570    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:37.903305    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:37.903322    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:37.903328    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:37.944363    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:37.944382    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:37.980365    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:37.980376    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:37.993643    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:37.993653    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:38.007604    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:38.007616    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:38.021987    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:38.021998    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:38.035158    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:38.035172    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:38.046667    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:38.046678    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:38.058478    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:38.058491    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:38.063314    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:38.063323    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:38.079257    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:38.079267    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:38.090328    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:38.090340    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:38.114583    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:38.114589    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:38.126399    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:38.126411    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:38.139474    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:38.139485    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:38.156696    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:38.156706    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:38.168306    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:38.168322    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
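The block above is minikube's per-component log sweep: each `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call discovers the (possibly exited) containers for one control-plane component, and each hit is then tailed with `docker logs --tail 400`. A minimal Go sketch of the same pattern, run against a local Docker daemon instead of over SSH (the helper name containerIDs is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the kubeadm naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring "docker logs --tail 400 <id>".
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}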
	I0802 11:08:39.967905    4699 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:08:39.967986    4699 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-387000 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/disk.qcow2
	I0802 11:08:40.015345    4699 main.go:141] libmachine: STDOUT: 
	I0802 11:08:40.015379    4699 main.go:141] libmachine: STDERR: 
	I0802 11:08:40.015385    4699 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
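The libmachine line above is the full qemu-system-aarch64 invocation for the stopped-upgrade VM: hvf acceleration, EDK2 firmware as read-only pflash, the boot2docker ISO, and a user-mode NIC whose hostfwd rules map host ports 50471 and 50472 to guest SSH (22) and the Docker API (2376). A hedged Go sketch of assembling a comparable command; the paths and ports are placeholders, and the QMP/pflash options are omitted, so this is not the qemu2 driver's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	machineDir := "/tmp/demo-machine" // placeholder for the .minikube machine dir
	sshPort, dockerPort := 50471, 50472

	args := []string{
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // Hypervisor.framework acceleration on Apple silicon
		"-m", "2200", "-smp", "2",
		"-display", "none",
		"-boot", "d",
		"-cdrom", machineDir + "/boot2docker.iso",
		"-pidfile", machineDir + "/qemu.pid",
		// User-mode NIC: forward host ports to guest SSH (22) and Docker (2376).
		"-nic", fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376",
			sshPort, dockerPort),
		"-daemonize",
		machineDir + "/disk.qcow2",
	}
	if out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput(); err != nil {
		fmt.Printf("qemu failed: %v\n%s", err, out)
	}
}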
	I0802 11:08:40.681917    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:45.684459    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
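Process 4562 is inside the apiserver wait loop: each attempt GETs https://10.0.2.15:8443/healthz with a short client timeout, and every timeout triggers another full log sweep like the one above. A minimal sketch of that probe; the real checker trusts the cluster CA, so the InsecureSkipVerify below is purely an illustration shortcut:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver health endpoint once, giving up after
// the supplied timeout (the log shows roughly 5s between an attempt and
// its "context deadline exceeded").
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Illustration only: the real code verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
			fmt.Println("stopped:", err) // matches the api_server.go:269 lines above
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}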
	I0802 11:08:45.684610    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:45.696261    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:45.696340    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:45.707656    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:45.707731    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:45.718312    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:45.718383    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:45.733218    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:45.733283    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:45.743974    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:45.744040    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:45.754699    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:45.754764    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:45.777714    4562 logs.go:276] 0 containers: []
	W0802 11:08:45.777727    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:45.777785    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:45.788402    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:45.788418    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:45.788424    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:45.823170    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:45.823182    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:45.835634    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:45.835644    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:45.847233    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:45.847245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:45.864139    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:45.864151    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:45.878208    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:45.878221    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:45.892211    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:45.892220    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:45.904273    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:45.904285    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:45.929010    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:45.929018    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:45.941050    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:45.941060    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:45.952377    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:45.952388    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:45.991009    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:45.991017    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:45.995105    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:45.995111    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:46.010328    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:46.010340    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:46.028963    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:46.028974    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:46.040834    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:46.040844    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:46.052224    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:46.052236    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:48.566252    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:53.568333    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:08:53.568515    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:08:53.580907    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:08:53.581010    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:08:53.591842    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:08:53.591922    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:08:53.602741    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:08:53.602812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:08:53.617887    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:08:53.617959    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:08:53.636147    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:08:53.636218    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:08:53.647335    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:08:53.647401    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:08:53.657870    4562 logs.go:276] 0 containers: []
	W0802 11:08:53.657880    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:08:53.657934    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:08:53.668509    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:08:53.668527    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:08:53.668533    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:08:53.672972    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:08:53.672979    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:08:53.687785    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:08:53.687796    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:08:53.699662    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:08:53.699675    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:08:53.712973    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:08:53.712988    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:08:53.727258    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:08:53.727273    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:08:53.739126    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:08:53.739137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:08:53.757539    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:08:53.757550    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:08:53.798802    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:08:53.798810    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:08:53.835015    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:08:53.835027    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:08:53.847247    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:08:53.847257    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:08:53.872001    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:08:53.872009    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:08:53.892152    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:08:53.892164    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:08:53.922216    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:08:53.922232    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:08:53.944882    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:08:53.944894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:08:53.958788    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:08:53.958800    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:08:53.971362    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:08:53.971374    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:08:56.485228    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:08:59.885497    4699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/config.json ...
	I0802 11:08:59.886094    4699 machine.go:94] provisionDockerMachine start ...
	I0802 11:08:59.886230    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:08:59.886661    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:08:59.886673    4699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 11:08:59.961160    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 11:08:59.961184    4699 buildroot.go:166] provisioning hostname "stopped-upgrade-387000"
	I0802 11:08:59.961256    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:08:59.961400    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:08:59.961408    4699 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-387000 && echo "stopped-upgrade-387000" | sudo tee /etc/hostname
	I0802 11:09:00.021116    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-387000
	
	I0802 11:09:00.021173    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.021292    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.021301    4699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 11:09:00.079910    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 11:09:00.079926    4699 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-1243/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-1243/.minikube}
	I0802 11:09:00.079936    4699 buildroot.go:174] setting up certificates
	I0802 11:09:00.079940    4699 provision.go:84] configureAuth start
	I0802 11:09:00.079949    4699 provision.go:143] copyHostCerts
	I0802 11:09:00.080017    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem, removing ...
	I0802 11:09:00.080023    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem
	I0802 11:09:00.080224    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem (1078 bytes)
	I0802 11:09:00.080417    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem, removing ...
	I0802 11:09:00.080420    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem
	I0802 11:09:00.080467    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem (1123 bytes)
	I0802 11:09:00.080570    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem, removing ...
	I0802 11:09:00.080573    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem
	I0802 11:09:00.080618    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem (1675 bytes)
	I0802 11:09:00.080731    4699 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-387000 san=[127.0.0.1 localhost minikube stopped-upgrade-387000]
	I0802 11:09:00.185855    4699 provision.go:177] copyRemoteCerts
	I0802 11:09:00.185895    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 11:09:00.185903    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.218550    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 11:09:00.225130    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 11:09:00.232161    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 11:09:00.238991    4699 provision.go:87] duration metric: took 159.048292ms to configureAuth
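configureAuth above refreshes the host-side ca/cert/key PEMs and mints a server certificate whose SANs cover 127.0.0.1, localhost, minikube, and the machine name, before copying everything to /etc/docker on the guest. A self-contained sketch of issuing such a SAN-bearing certificate with crypto/x509 and a throwaway CA; this is the same idea, not minikube's provision helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (the real flow reuses ca.pem / ca-key.pem on disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demoCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Server cert with the SANs seen in the provision.go:117 line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "stopped-upgrade-387000"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-387000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}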
	I0802 11:09:00.239000    4699 buildroot.go:189] setting minikube options for container-runtime
	I0802 11:09:00.239117    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:09:00.239150    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.239236    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.239241    4699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0802 11:09:00.292556    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0802 11:09:00.292565    4699 buildroot.go:70] root file system type: tmpfs
	I0802 11:09:00.292615    4699 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0802 11:09:00.292660    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.292767    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.292800    4699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0802 11:09:00.350527    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0802 11:09:00.350581    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.350692    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.350701    4699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0802 11:09:00.715955    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0802 11:09:00.715967    4699 machine.go:97] duration metric: took 829.893833ms to provisionDockerMachine
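The `diff -u ... || { mv ...; systemctl ... }` one-liner above makes the unit install idempotent: when the rendered docker.service.new matches what is already on disk, nothing is touched; otherwise the new unit is moved into place, systemd reloaded, and docker enabled and restarted. Here diff fails because no unit exists yet, so the install branch runs and the symlink is created. A sketch of the same decision in Go, run locally and without sudo, so the paths are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// installIfChanged replaces dst with src and restarts the unit only when
// the two files differ (or dst does not exist), mirroring the
// "diff -u ... || { mv ...; systemctl ... }" pattern in the log.
func installIfChanged(src, dst, unit string) error {
	if err := exec.Command("diff", "-u", dst, src).Run(); err == nil {
		return nil // identical: leave the running service alone
	}
	steps := [][]string{
		{"mv", src, dst},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
		"docker"))
}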
	I0802 11:09:00.715974    4699 start.go:293] postStartSetup for "stopped-upgrade-387000" (driver="qemu2")
	I0802 11:09:00.715981    4699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 11:09:00.716044    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 11:09:00.716053    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.747103    4699 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 11:09:00.748325    4699 info.go:137] Remote host: Buildroot 2021.02.12
	I0802 11:09:00.748332    4699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/addons for local assets ...
	I0802 11:09:00.748405    4699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/files for local assets ...
	I0802 11:09:00.748519    4699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem -> 17472.pem in /etc/ssl/certs
	I0802 11:09:00.748617    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 11:09:00.751401    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:09:00.758237    4699 start.go:296] duration metric: took 42.260292ms for postStartSetup
	I0802 11:09:00.758250    4699 fix.go:56] duration metric: took 20.803541167s for fixHost
	I0802 11:09:00.758282    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.758403    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.758408    4699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 11:09:00.810608    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722622141.021780212
	
	I0802 11:09:00.810615    4699 fix.go:216] guest clock: 1722622141.021780212
	I0802 11:09:00.810623    4699 fix.go:229] Guest: 2024-08-02 11:09:01.021780212 -0700 PDT Remote: 2024-08-02 11:09:00.758251 -0700 PDT m=+20.924383001 (delta=263.529212ms)
	I0802 11:09:00.810633    4699 fix.go:200] guest clock delta is within tolerance: 263.529212ms
	I0802 11:09:00.810636    4699 start.go:83] releasing machines lock for "stopped-upgrade-387000", held for 20.855935958s
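fix.go above compares the guest's `date +%s.%N` output against the host clock and skips resynchronization because the 263ms delta is within tolerance. A small sketch of that comparison; the one-second tolerance below is an assumption for illustration, not the real threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far
// it drifts from the given host time.
func clockDelta(guestOut string, now time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(now), nil
}

func main() {
	// Values taken from the log lines above.
	delta, _ := clockDelta("1722622141.021780212", time.Unix(1722622140, 758251000))
	tolerance := time.Second // illustrative; the real threshold lives in fix.go
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("resync needed, delta %v\n", delta)
	}
}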
	I0802 11:09:00.810696    4699 ssh_runner.go:195] Run: cat /version.json
	I0802 11:09:00.810705    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.810696    4699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 11:09:00.810738    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	W0802 11:09:00.811246    4699 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50594->127.0.0.1:50471: write: broken pipe
	I0802 11:09:00.811262    4699 retry.go:31] will retry after 367.303703ms: ssh: handshake failed: write tcp 127.0.0.1:50594->127.0.0.1:50471: write: broken pipe
	W0802 11:09:01.229456    4699 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0802 11:09:01.229610    4699 ssh_runner.go:195] Run: systemctl --version
	I0802 11:09:01.233196    4699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 11:09:01.236753    4699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 11:09:01.236821    4699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0802 11:09:01.242213    4699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0802 11:09:01.253373    4699 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
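The two find/sed pipelines above rewrite any bridge or podman CNI config so every subnet becomes 10.244.0.0/16 and IPv6 entries are dropped. The same edit is easier to read as a JSON transform; a hedged sketch that patches every "subnet" key in a conflist (a simplification, since real conflists vary in how plugins nest their IPAM ranges):

package main

import (
	"encoding/json"
	"fmt"
)

// patchSubnets walks arbitrary JSON and rewrites every "subnet" value,
// which is what the sed expressions in the log do textually.
func patchSubnets(v interface{}, subnet string) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, val := range t {
			if k == "subnet" {
				t[k] = subnet
				continue
			}
			patchSubnets(val, subnet)
		}
	case []interface{}:
		for _, val := range t {
			patchSubnets(val, subnet)
		}
	}
}

func main() {
	conf := []byte(`{"plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}]}`)
	var doc map[string]interface{}
	if err := json.Unmarshal(conf, &doc); err != nil {
		panic(err)
	}
	patchSubnets(doc, "10.244.0.0/16") // the pod CIDR minikube standardizes on
	out, _ := json.Marshal(doc)
	fmt.Println(string(out))
}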
	I0802 11:09:01.253388    4699 start.go:495] detecting cgroup driver to use...
	I0802 11:09:01.253498    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:09:01.266116    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0802 11:09:01.269771    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0802 11:09:01.274747    4699 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0802 11:09:01.274818    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0802 11:09:01.278441    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:09:01.281527    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0802 11:09:01.284886    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:09:01.289223    4699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 11:09:01.292903    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0802 11:09:01.296127    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0802 11:09:01.299823    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0802 11:09:01.303267    4699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 11:09:01.305911    4699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 11:09:01.308772    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:01.379646    4699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0802 11:09:01.387023    4699 start.go:495] detecting cgroup driver to use...
	I0802 11:09:01.387098    4699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0802 11:09:01.392610    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:09:01.397940    4699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 11:09:01.404365    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:09:01.409044    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 11:09:01.413862    4699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0802 11:09:01.455001    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 11:09:01.459927    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:09:01.465406    4699 ssh_runner.go:195] Run: which cri-dockerd
	I0802 11:09:01.466673    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0802 11:09:01.469187    4699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0802 11:09:01.474414    4699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0802 11:09:01.554951    4699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0802 11:09:01.632077    4699 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0802 11:09:01.632154    4699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0802 11:09:01.638124    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:01.719886    4699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:09:02.882062    4699 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162202334s)
	I0802 11:09:02.882127    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0802 11:09:02.886822    4699 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0802 11:09:02.892743    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:09:02.897622    4699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0802 11:09:02.973078    4699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0802 11:09:03.051779    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:03.131221    4699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0802 11:09:03.138170    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:09:03.142768    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:03.219207    4699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0802 11:09:03.265508    4699 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0802 11:09:03.265612    4699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0802 11:09:03.267828    4699 start.go:563] Will wait 60s for crictl version
	I0802 11:09:03.267869    4699 ssh_runner.go:195] Run: which crictl
	I0802 11:09:03.269118    4699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 11:09:03.283871    4699 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0802 11:09:03.283932    4699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 11:09:03.304666    4699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 11:09:03.325383    4699 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0802 11:09:03.325449    4699 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0802 11:09:03.326683    4699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
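The grep/echo pipeline above is an idempotent hosts-file update: strip any existing host.minikube.internal line, append a fresh 10.0.2.2 mapping, and copy the temp file back over /etc/hosts. A sketch of the same idea in Go, writing to a temp path so it can run unprivileged:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the shell pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	in, _ := os.ReadFile("/etc/hosts")
	out := upsertHost(strings.TrimRight(string(in), "\n"), "10.0.2.2", "host.minikube.internal")
	// Demo: write to /tmp instead of copying over /etc/hosts with sudo.
	os.WriteFile("/tmp/hosts.new", []byte(out), 0644)
}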
	I0802 11:09:03.330091    4699 kubeadm.go:883] updating cluster {Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0802 11:09:03.330150    4699 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:09:03.330191    4699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:09:03.340727    4699 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:09:03.340737    4699 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:09:03.340784    4699 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:09:03.344419    4699 ssh_runner.go:195] Run: which lz4
	I0802 11:09:03.345738    4699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 11:09:03.346982    4699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 11:09:03.346992    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0802 11:09:04.239717    4699 docker.go:649] duration metric: took 894.036916ms to copy over tarball
	I0802 11:09:04.239792    4699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
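The preload path above: `stat /preloaded.tar.lz4` fails on the fresh guest, so the roughly 360MB preloaded-images tarball is copied over and unpacked into /var with `tar -I lz4`, keeping xattrs so file capabilities survive. A sketch of the check-then-copy-then-extract sequence, with local file copies standing in for the SSH transfer:

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensurePreload copies src to dst only when dst is missing, then extracts
// it, mirroring the stat / scp / tar sequence in the log.
func ensurePreload(src, dst, destDir string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, in); err != nil {
			out.Close()
			return err
		}
		out.Close()
	}
	// --xattrs keeps security.capability bits on binaries like kubelet.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", dst)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(ensurePreload("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4", "/tmp/var"))
}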
	I0802 11:09:01.486686    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:01.486811    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:01.497410    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:01.497480    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:01.508075    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:01.508134    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:01.520228    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:01.520291    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:01.531129    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:01.531198    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:01.542533    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:01.542602    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:01.553534    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:01.553600    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:01.564439    4562 logs.go:276] 0 containers: []
	W0802 11:09:01.564450    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:01.564507    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:01.574893    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:01.574908    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:01.574913    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:01.586710    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:01.586721    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:01.626055    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:01.626072    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:01.644154    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:01.644166    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:01.657924    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:01.657938    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:01.672292    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:01.672306    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:01.684676    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:01.684689    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:01.698858    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:01.698871    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:01.716507    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:01.716524    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:01.729074    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:01.729085    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:01.733289    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:01.733295    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:01.767966    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:01.767977    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:01.779577    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:01.779591    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:01.791330    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:01.791340    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:01.803136    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:01.803152    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:01.828671    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:01.828681    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:01.844482    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:01.844496    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:04.357081    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:05.396966    4699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157201042s)
	I0802 11:09:05.396980    4699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 11:09:05.412963    4699 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:09:05.416370    4699 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0802 11:09:05.421303    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:05.499035    4699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:09:07.070674    4699 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.571678958s)
	I0802 11:09:07.070768    4699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:09:07.082269    4699 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:09:07.082281    4699 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:09:07.082287    4699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 11:09:07.088749    4699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.090616    4699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.092380    4699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.092469    4699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.094584    4699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.094760    4699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.095911    4699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.096356    4699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.097263    4699 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0802 11:09:07.098232    4699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.098269    4699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.099472    4699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.099499    4699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.099532    4699 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0802 11:09:07.100318    4699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.100927    4699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.552872    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.552872    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.564954    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.564954    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.572858    4699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0802 11:09:07.572901    4699 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.572959    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.575544    4699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0802 11:09:07.575564    4699 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.575606    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.586036    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0802 11:09:07.591937    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.597840    4699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0802 11:09:07.597861    4699 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.597912    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.598526    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0802 11:09:07.598673    4699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0802 11:09:07.598682    4699 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.598708    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.603276    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0802 11:09:07.609126    4699 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0802 11:09:07.609257    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.612360    4699 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0802 11:09:07.612377    4699 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0802 11:09:07.612431    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0802 11:09:07.622048    4699 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0802 11:09:07.622072    4699 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.622128    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.624089    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0802 11:09:07.632192    4699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0802 11:09:07.632207    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0802 11:09:07.632212    4699 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.632261    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.634205    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0802 11:09:07.634313    4699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0802 11:09:07.646219    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0802 11:09:07.646239    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0802 11:09:07.646270    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0802 11:09:07.646284    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0802 11:09:07.646344    4699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:09:07.648617    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0802 11:09:07.648638    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0802 11:09:07.685782    4699 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0802 11:09:07.685799    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0802 11:09:07.712089    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0802 11:09:07.712131    4699 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:09:07.712140    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0802 11:09:07.749214    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
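
	The pause and coredns transfers above follow the same three-step pattern: an existence check on the node, an scp of the cached tarball when the check fails, and a piped docker load. A minimal Go sketch of that flow; the "node" ssh target is a placeholder standing in for minikube's ssh_runner, not its actual API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the stat -> scp -> "cat | docker load" sequence
	// from the log above. The "node" ssh host is illustrative only.
	func loadCachedImage(local, remote string) error {
		// stat exits non-zero when the tarball is missing on the node.
		if err := exec.Command("ssh", "node", "stat", remote).Run(); err != nil {
			if err := exec.Command("scp", local, "node:"+remote).Run(); err != nil {
				return fmt.Errorf("scp %s: %w", local, err)
			}
		}
		// Pipe the tarball into the container runtime, as the log shows.
		load := fmt.Sprintf("sudo cat %s | docker load", remote)
		if out, err := exec.Command("ssh", "node", "/bin/bash", "-c", load).CombinedOutput(); err != nil {
			return fmt.Errorf("docker load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("cache/images/arm64/registry.k8s.io/pause_3.7",
			"/var/lib/minikube/images/pause_3.7")
		fmt.Println(err)
	}
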
	W0802 11:09:07.875231    4699 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0802 11:09:07.875344    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.887689    4699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0802 11:09:07.887716    4699 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.887775    4699 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.904460    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0802 11:09:07.904578    4699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:09:07.906046    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0802 11:09:07.906059    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0802 11:09:07.932998    4699 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:09:07.933011    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0802 11:09:08.170542    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0802 11:09:08.170582    4699 cache_images.go:92] duration metric: took 1.088328375s to LoadCachedImages
	W0802 11:09:08.170632    4699 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0802 11:09:08.170639    4699 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0802 11:09:08.170693    4699 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 11:09:08.170763    4699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0802 11:09:08.186803    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:09:08.186815    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:09:08.186821    4699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 11:09:08.186829    4699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-387000 NodeName:stopped-upgrade-387000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 11:09:08.186902    4699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-387000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
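
	The kubeadm config dumped above is rendered from Go before being copied to /var/tmp/minikube/kubeadm.yaml.new. A simplified text/template sketch of that rendering step; the template and field names here are illustrative, not minikube's actual ones.

	package main

	import (
		"os"
		"text/template"
	)

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values match the node settings shown in the log above.
		_ = t.Execute(os.Stdout, map[string]interface{}{
			"AdvertiseAddress": "10.0.2.15",
			"APIServerPort":    8443,
			"CRISocket":        "/var/run/cri-dockerd.sock",
			"NodeName":         "stopped-upgrade-387000",
		})
	}
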
	
	I0802 11:09:08.186961    4699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0802 11:09:08.189980    4699 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 11:09:08.190026    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 11:09:08.192717    4699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0802 11:09:08.197942    4699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 11:09:08.202676    4699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0802 11:09:08.207982    4699 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0802 11:09:08.209392    4699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
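
	The one-liner above makes the /etc/hosts update idempotent: strip any prior control-plane.minikube.internal entry, append the current mapping, and copy a temp file into place. The same logic as a Go sketch; it only writes the temp file, leaving out the privileged copy the log performs with sudo cp.

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
		data, _ := os.ReadFile("/etc/hosts")
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any existing mapping for the control-plane name.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry, "")
		// A real flow would then `sudo cp` this over /etc/hosts.
		_ = os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")), 0644)
	}
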
	I0802 11:09:08.213010    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:08.287483    4699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:09:08.292517    4699 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000 for IP: 10.0.2.15
	I0802 11:09:08.292526    4699 certs.go:194] generating shared ca certs ...
	I0802 11:09:08.292534    4699 certs.go:226] acquiring lock for ca certs: {Name:mkac8babaf2bcf8bb25aa8e1753c51c03330d7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.292697    4699 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key
	I0802 11:09:08.292732    4699 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key
	I0802 11:09:08.292737    4699 certs.go:256] generating profile certs ...
	I0802 11:09:08.292804    4699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key
	I0802 11:09:08.292820    4699 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6
	I0802 11:09:08.292832    4699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0802 11:09:08.357945    4699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 ...
	I0802 11:09:08.357959    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6: {Name:mka86f54a14f32e9568dd2405cd0db2a37448308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.358678    4699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6 ...
	I0802 11:09:08.358684    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6: {Name:mk6dd8d61bfdc6521999136ed418d64b051deb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.358860    4699 certs.go:381] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt
	I0802 11:09:08.358986    4699 certs.go:385] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key
	I0802 11:09:08.359131    4699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.key
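
	The apiserver profile cert generated above is issued against the shared minikubeCA with the four IP SANs listed in the log. A rough crypto/x509 sketch of issuing such a cert; here a freshly generated CA stands in for the one minikube loads from .minikube/ca.key.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical in-memory CA; the real flow reuses the on-disk CA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the IP SANs from the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
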
	I0802 11:09:08.359266    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem (1338 bytes)
	W0802 11:09:08.359292    4699 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747_empty.pem, impossibly tiny 0 bytes
	I0802 11:09:08.359297    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 11:09:08.359317    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem (1078 bytes)
	I0802 11:09:08.359335    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem (1123 bytes)
	I0802 11:09:08.359358    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem (1675 bytes)
	I0802 11:09:08.359400    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:09:08.359728    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 11:09:08.366909    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0802 11:09:08.373263    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 11:09:08.380472    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 11:09:08.387701    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 11:09:08.396718    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 11:09:08.404142    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 11:09:08.411653    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 11:09:08.418810    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem --> /usr/share/ca-certificates/1747.pem (1338 bytes)
	I0802 11:09:08.425352    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /usr/share/ca-certificates/17472.pem (1708 bytes)
	I0802 11:09:08.432413    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 11:09:08.439304    4699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 11:09:08.444402    4699 ssh_runner.go:195] Run: openssl version
	I0802 11:09:08.446438    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17472.pem && ln -fs /usr/share/ca-certificates/17472.pem /etc/ssl/certs/17472.pem"
	I0802 11:09:08.449160    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.450513    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:35 /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.450531    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.452286    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 11:09:08.455317    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 11:09:08.457984    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.459356    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:26 /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.459373    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.461078    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 11:09:08.464255    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1747.pem && ln -fs /usr/share/ca-certificates/1747.pem /etc/ssl/certs/1747.pem"
	I0802 11:09:08.467353    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.468649    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:35 /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.468665    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.470528    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1747.pem /etc/ssl/certs/51391683.0"
	I0802 11:09:08.473347    4699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 11:09:08.474732    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 11:09:08.476658    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 11:09:08.478351    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 11:09:08.480191    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 11:09:08.481988    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 11:09:08.483623    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
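
	Each `openssl x509 -checkend 86400` run above asks whether the given cert expires within the next day (exit status non-zero when it does). The equivalent test in Go, as a sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires within d,
	// mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 86400*time.Second)
		fmt.Println(soon, err)
	}
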
	I0802 11:09:08.485320    4699 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:09:08.485382    4699 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:09:08.495337    4699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 11:09:08.498446    4699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 11:09:08.498451    4699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 11:09:08.498474    4699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 11:09:08.501072    4699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:09:08.501377    4699 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-387000" does not appear in /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:09:08.501485    4699 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-1243/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-387000" cluster setting kubeconfig missing "stopped-upgrade-387000" context setting]
	I0802 11:09:08.501690    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.502202    4699 kapi.go:59] client config for stopped-upgrade-387000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e641b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 11:09:08.502543    4699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 11:09:08.505131    4699 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-387000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
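
	The drift detection above is a plain `diff -u` between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new; any difference triggers reconfiguration. A sketch of that check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrift reports whether the two files differ, returning the
	// unified diff when they do.
	func configDrift(oldPath, newPath string) (bool, string) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err != nil {
			// diff exits 1 when the files differ (other failures land here too).
			return true, string(out)
		}
		return false, ""
	}

	func main() {
		drift, patch := configDrift("/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		if drift {
			fmt.Println("kubeadm config drift detected:\n" + patch)
		}
	}
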
	I0802 11:09:08.505138    4699 kubeadm.go:1160] stopping kube-system containers ...
	I0802 11:09:08.505178    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:09:08.515726    4699 docker.go:483] Stopping containers: [06f4cb7c5b7d 8d6ae6ac7f08 c62a1899d653 0237f334d11e 241be9c6963f beaa5f7a2b37 179baee8dbee 15f78d53f678]
	I0802 11:09:08.515790    4699 ssh_runner.go:195] Run: docker stop 06f4cb7c5b7d 8d6ae6ac7f08 c62a1899d653 0237f334d11e 241be9c6963f beaa5f7a2b37 179baee8dbee 15f78d53f678
	I0802 11:09:08.526602    4699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 11:09:08.531980    4699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:09:08.534805    4699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:09:08.534813    4699 kubeadm.go:157] found existing configuration files:
	
	I0802 11:09:08.534836    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0802 11:09:08.537186    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:09:08.537203    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:09:08.540051    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0802 11:09:08.542743    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:09:08.542765    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:09:08.545151    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0802 11:09:08.548003    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:09:08.548024    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:09:08.550565    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0802 11:09:08.552799    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:09:08.552818    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
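
	The grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not reference the expected control-plane endpoint (or is missing entirely) is removed so kubeadm can regenerate it. A compact sketch of the same loop:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:50506"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f) // missing or pointing elsewhere: drop it
			}
		}
	}
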
	I0802 11:09:08.555708    4699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:09:08.558530    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:08.582430    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.020832    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.153011    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.178895    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
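
	Rather than a full `kubeadm init`, the restart path above runs the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch of that sequence:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}
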
	I0802 11:09:09.202707    4699 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:09:09.202792    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:09.704814    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:09.358839    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:09.358952    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:09.369861    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:09.369943    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:09.380210    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:09.380281    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:09.390732    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:09.390819    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:09.401054    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:09.401123    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:09.411675    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:09.411745    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:09.422664    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:09.422730    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:09.432297    4562 logs.go:276] 0 containers: []
	W0802 11:09:09.432309    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:09.432365    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:09.443273    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:09.443291    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:09.443297    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:09.448031    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:09.448040    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:09.470487    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:09.470501    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:09.482512    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:09.482525    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:09.503238    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:09.503248    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:09.522486    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:09.522497    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:09.535278    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:09.535288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:09.546639    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:09.546650    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:09.582478    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:09.582489    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:09.606450    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:09.606461    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:09.618396    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:09.618410    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:09.630525    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:09.630541    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:09.643084    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:09.643098    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:09.654750    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:09.654762    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:09.668030    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:09.668041    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:09.681719    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:09.681757    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:09.723176    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:09.723189    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
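
	Each "Gathering logs for ..." pass above enumerates component containers with a docker ps name filter and captures their last 400 log lines. A sketch of the loop:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose name matches k8s_<component>,
	// the same filter the log shows.
	func containerIDs(component string) []string {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		return strings.Fields(string(out))
	}

	func main() {
		for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
			for _, id := range containerIDs(comp) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s\n", comp, id, logs)
			}
		}
	}
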
	I0802 11:09:10.204789    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:10.209102    4699 api_server.go:72] duration metric: took 1.006431875s to wait for apiserver process to appear ...
	I0802 11:09:10.209112    4699 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:09:10.209123    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
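
	The healthz wait that follows polls https://10.0.2.15:8443/healthz until it answers or the overall deadline passes; each request itself times out after a few seconds, which is what produces the repeated "context deadline exceeded" lines below. A sketch of such a poll, with TLS verification skipped purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second, // roughly matches the gaps between checks
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
	}
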
	I0802 11:09:12.239723    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:15.211087    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:15.211118    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:17.241832    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:17.242049    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:17.254305    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:17.254378    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:17.264901    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:17.264963    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:17.275901    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:17.275972    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:17.286530    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:17.286597    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:17.297010    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:17.297084    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:17.307796    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:17.307865    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:17.318308    4562 logs.go:276] 0 containers: []
	W0802 11:09:17.318321    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:17.318387    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:17.329186    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:17.329209    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:17.329214    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:17.340417    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:17.340430    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:17.352194    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:17.352204    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:17.363200    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:17.363211    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:17.375415    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:17.375426    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:17.387417    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:17.387427    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:17.425098    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:17.425113    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:17.447313    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:17.447321    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:17.451620    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:17.451626    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:17.465802    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:17.465815    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:17.477747    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:17.477757    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:17.489092    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:17.489106    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:17.527515    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:17.527529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:17.541374    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:17.541384    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:17.554579    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:17.554588    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:17.572534    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:17.572549    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:17.584821    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:17.584832    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:20.211168    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:20.211187    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:20.098589    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:25.211305    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:25.211320    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:25.100652    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:25.100872    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:25.118113    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:25.118204    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:25.132081    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:25.132162    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:25.147616    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:25.147686    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:25.157492    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:25.157557    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:25.167487    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:25.167555    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:25.177559    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:25.177625    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:25.187922    4562 logs.go:276] 0 containers: []
	W0802 11:09:25.187933    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:25.187995    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:25.198751    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:25.198767    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:25.198772    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:25.210416    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:25.210428    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:25.222227    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:25.222239    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:25.227079    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:25.227085    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:25.239128    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:25.239139    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:25.253014    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:25.253024    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:25.270998    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:25.271009    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:25.294997    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:25.295004    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:25.306081    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:25.306093    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:25.317649    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:25.317661    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:25.329621    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:25.329631    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:25.341394    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:25.341403    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:25.353599    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:25.353613    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:25.393160    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:25.393167    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:25.428710    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:25.428721    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:25.442409    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:25.442420    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:25.457420    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:25.457430    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:27.971708    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:30.211524    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:30.211568    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:32.972720    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:32.972973    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:33.000575    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:33.000696    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:33.019025    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:33.019103    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:33.031895    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:33.031965    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:33.044118    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:33.044204    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:33.054936    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:33.055037    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:33.065469    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:33.065559    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:33.076018    4562 logs.go:276] 0 containers: []
	W0802 11:09:33.076030    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:33.076083    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:33.086156    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:33.086174    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:33.086180    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:33.098126    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:33.098137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:33.112235    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:33.112245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:33.123833    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:33.123843    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:33.135339    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:33.135351    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:33.153334    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:33.153344    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:33.165644    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:33.165658    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:33.190111    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:33.190124    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:33.229510    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:33.229519    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:33.265090    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:33.265103    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:33.278338    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:33.278349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:33.292544    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:33.292554    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:33.311803    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:33.311816    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:33.323830    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:33.323845    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:33.328789    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:33.328797    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:33.343738    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:33.343750    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:33.356777    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:33.356787    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:35.211963    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:35.212012    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:35.870516    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:40.212719    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:40.212752    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:40.871234    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:40.871575    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:40.900489    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:40.900652    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:40.919209    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:40.919292    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:40.932602    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:40.932674    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:40.943938    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:40.944010    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:40.954303    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:40.954377    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:40.965132    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:40.965202    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:40.975009    4562 logs.go:276] 0 containers: []
	W0802 11:09:40.975020    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:40.975078    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:40.985804    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:40.985821    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:40.985827    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:40.997040    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:40.997053    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:41.020122    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:41.020133    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:41.056778    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:41.056787    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:41.061751    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:41.061760    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:41.097966    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:41.097978    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:41.110268    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:41.110283    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:41.127975    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:41.127985    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:41.148330    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:41.148344    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:41.162129    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:41.162140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:41.175695    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:41.175704    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:41.188344    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:41.188357    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:41.199456    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:41.199466    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:41.210858    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:41.210870    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:41.224780    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:41.224792    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:41.235884    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:41.235894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:41.248629    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:41.248640    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:43.761482    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:45.213569    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:45.213640    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:48.763612    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:48.763898    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:09:48.797773    4562 logs.go:276] 2 containers: [c09d11446cc1 68ac2873ee50]
	I0802 11:09:48.797860    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:09:48.826154    4562 logs.go:276] 2 containers: [83a8068ecd07 78baef9bff76]
	I0802 11:09:48.826248    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:09:48.843506    4562 logs.go:276] 1 containers: [87abc444edba]
	I0802 11:09:48.843586    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:09:48.855626    4562 logs.go:276] 2 containers: [4bffeec09c81 27cb24e2108d]
	I0802 11:09:48.855697    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:09:48.866384    4562 logs.go:276] 1 containers: [828ee52e1927]
	I0802 11:09:48.866454    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:09:48.876877    4562 logs.go:276] 2 containers: [d4ad7d25e56f e9b01549a648]
	I0802 11:09:48.876946    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:09:48.886994    4562 logs.go:276] 0 containers: []
	W0802 11:09:48.887006    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:09:48.887076    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:09:48.899154    4562 logs.go:276] 2 containers: [d76b769c8751 5fce1e971494]
	I0802 11:09:48.899172    4562 logs.go:123] Gathering logs for kube-apiserver [68ac2873ee50] ...
	I0802 11:09:48.899179    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ac2873ee50"
	I0802 11:09:48.916810    4562 logs.go:123] Gathering logs for storage-provisioner [5fce1e971494] ...
	I0802 11:09:48.916821    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fce1e971494"
	I0802 11:09:48.928843    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:09:48.928852    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:09:48.968078    4562 logs.go:123] Gathering logs for kube-controller-manager [e9b01549a648] ...
	I0802 11:09:48.968088    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9b01549a648"
	I0802 11:09:48.979792    4562 logs.go:123] Gathering logs for kube-apiserver [c09d11446cc1] ...
	I0802 11:09:48.979805    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c09d11446cc1"
	I0802 11:09:48.993762    4562 logs.go:123] Gathering logs for etcd [83a8068ecd07] ...
	I0802 11:09:48.993774    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83a8068ecd07"
	I0802 11:09:49.008179    4562 logs.go:123] Gathering logs for kube-scheduler [27cb24e2108d] ...
	I0802 11:09:49.008190    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27cb24e2108d"
	I0802 11:09:49.027200    4562 logs.go:123] Gathering logs for storage-provisioner [d76b769c8751] ...
	I0802 11:09:49.027212    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76b769c8751"
	I0802 11:09:49.038587    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:09:49.038602    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:09:49.061277    4562 logs.go:123] Gathering logs for kube-controller-manager [d4ad7d25e56f] ...
	I0802 11:09:49.061288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4ad7d25e56f"
	I0802 11:09:49.078988    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:09:49.078999    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:09:49.091014    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:09:49.091025    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:09:49.095280    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:09:49.095288    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:09:49.130382    4562 logs.go:123] Gathering logs for etcd [78baef9bff76] ...
	I0802 11:09:49.130393    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78baef9bff76"
	I0802 11:09:49.144206    4562 logs.go:123] Gathering logs for coredns [87abc444edba] ...
	I0802 11:09:49.144215    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87abc444edba"
	I0802 11:09:49.159527    4562 logs.go:123] Gathering logs for kube-scheduler [4bffeec09c81] ...
	I0802 11:09:49.159538    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bffeec09c81"
	I0802 11:09:49.171010    4562 logs.go:123] Gathering logs for kube-proxy [828ee52e1927] ...
	I0802 11:09:49.171020    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828ee52e1927"
	I0802 11:09:50.215012    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:50.215142    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:51.685553    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:56.688120    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:56.688228    4562 kubeadm.go:597] duration metric: took 4m4.350327625s to restartPrimaryControlPlane
	W0802 11:09:56.688277    4562 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
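At this point process 4562 gives up on restarting the existing control plane and falls back to a full kubeadm reset. The healthz endpoint it has been polling can be probed by hand from the guest; a minimal sketch, assuming SSH access to the VM and the same in-guest address seen in the log:

    # Probe the apiserver health endpoint minikube polls above.
    # -k skips TLS verification; --max-time approximates the client timeout (assumed value).
    curl -k --max-time 5 https://10.0.2.15:8443/healthz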
	I0802 11:09:56.688298    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0802 11:09:57.638134    4562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 11:09:57.643543    4562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:09:57.646827    4562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:09:57.649484    4562 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:09:57.649491    4562 kubeadm.go:157] found existing configuration files:
	
	I0802 11:09:57.649514    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf
	I0802 11:09:57.652563    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:09:57.652590    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:09:57.655623    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf
	I0802 11:09:57.657906    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:09:57.657924    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:09:57.660965    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf
	I0802 11:09:57.663958    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:09:57.663980    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:09:57.666445    4562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf
	I0802 11:09:57.669164    4562 kubeadm.go:163] "https://control-plane.minikube.internal:50312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50312 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:09:57.669190    4562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
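The four grep/rm pairs above implement a simple stale-config sweep: any kubeconfig that does not mention the expected control-plane endpoint is removed before re-initializing. A sketch of the same sweep as a loop, assuming the same endpoint and file set as the log:

    # Drop kubeconfigs that do not reference the expected endpoint (port 50312 here).
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50312" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done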
	I0802 11:09:57.672370    4562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 11:09:57.689914    4562 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0802 11:09:57.690015    4562 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 11:09:57.741170    4562 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 11:09:57.741234    4562 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 11:09:57.741289    4562 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 11:09:57.789812    4562 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 11:09:57.792974    4562 out.go:204]   - Generating certificates and keys ...
	I0802 11:09:57.793009    4562 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 11:09:57.793046    4562 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 11:09:57.793083    4562 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 11:09:57.793110    4562 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 11:09:57.793151    4562 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 11:09:57.793176    4562 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 11:09:57.793218    4562 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 11:09:57.793278    4562 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 11:09:57.793316    4562 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 11:09:57.793367    4562 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 11:09:57.793412    4562 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 11:09:57.793443    4562 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 11:09:58.167783    4562 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 11:09:58.236612    4562 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 11:09:58.328958    4562 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 11:09:58.489125    4562 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 11:09:58.517125    4562 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 11:09:58.517480    4562 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 11:09:58.517510    4562 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 11:09:58.613124    4562 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 11:09:55.215875    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:55.215955    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:58.617043    4562 out.go:204]   - Booting up control plane ...
	I0802 11:09:58.617095    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 11:09:58.617144    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 11:09:58.617174    4562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 11:09:58.617236    4562 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 11:09:58.617320    4562 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 11:10:03.119588    4562 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502442 seconds
	I0802 11:10:03.119670    4562 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 11:10:03.124351    4562 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 11:10:03.635764    4562 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 11:10:03.635939    4562 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-894000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 11:10:04.143623    4562 kubeadm.go:310] [bootstrap-token] Using token: 76xl6n.zpcdcvfslw7pcvqc
	I0802 11:10:04.149705    4562 out.go:204]   - Configuring RBAC rules ...
	I0802 11:10:04.149758    4562 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 11:10:04.149805    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 11:10:04.153334    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 11:10:04.154412    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 11:10:04.156577    4562 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 11:10:04.157478    4562 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 11:10:04.160542    4562 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 11:10:04.330292    4562 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 11:10:04.547567    4562 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 11:10:04.548050    4562 kubeadm.go:310] 
	I0802 11:10:04.548079    4562 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 11:10:04.548138    4562 kubeadm.go:310] 
	I0802 11:10:04.548172    4562 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 11:10:04.548175    4562 kubeadm.go:310] 
	I0802 11:10:04.548190    4562 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 11:10:04.548223    4562 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 11:10:04.548250    4562 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 11:10:04.548253    4562 kubeadm.go:310] 
	I0802 11:10:04.548334    4562 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 11:10:04.548338    4562 kubeadm.go:310] 
	I0802 11:10:04.548434    4562 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 11:10:04.548437    4562 kubeadm.go:310] 
	I0802 11:10:04.548461    4562 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 11:10:04.548538    4562 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 11:10:04.548614    4562 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 11:10:04.548620    4562 kubeadm.go:310] 
	I0802 11:10:04.548679    4562 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 11:10:04.548776    4562 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 11:10:04.548782    4562 kubeadm.go:310] 
	I0802 11:10:04.548836    4562 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 76xl6n.zpcdcvfslw7pcvqc \
	I0802 11:10:04.548896    4562 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d \
	I0802 11:10:04.548913    4562 kubeadm.go:310] 	--control-plane 
	I0802 11:10:04.548916    4562 kubeadm.go:310] 
	I0802 11:10:04.548984    4562 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 11:10:04.548991    4562 kubeadm.go:310] 
	I0802 11:10:04.549034    4562 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 76xl6n.zpcdcvfslw7pcvqc \
	I0802 11:10:04.549082    4562 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d 
	I0802 11:10:04.549128    4562 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 11:10:04.549138    4562 cni.go:84] Creating CNI manager for ""
	I0802 11:10:04.549146    4562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:10:04.553306    4562 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 11:10:04.560330    4562 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 11:10:04.563912    4562 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
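The 496-byte conflist itself is not reproduced in the log. For orientation, a typical bridge conflist of the kind minikube installs looks roughly like the following; the field values here are illustrative assumptions, not the exact file transferred:

    # Illustrative only: a typical bridge CNI config (not the exact contents scp'd above).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF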
	I0802 11:10:04.568712    4562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 11:10:04.568763    4562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 11:10:04.568806    4562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-894000 minikube.k8s.io/updated_at=2024_08_02T11_10_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=running-upgrade-894000 minikube.k8s.io/primary=true
	I0802 11:10:04.609394    4562 ops.go:34] apiserver oom_adj: -16
	I0802 11:10:04.609408    4562 kubeadm.go:1113] duration metric: took 40.688833ms to wait for elevateKubeSystemPrivileges
	I0802 11:10:04.609433    4562 kubeadm.go:394] duration metric: took 4m12.285540875s to StartCluster
	I0802 11:10:04.609443    4562 settings.go:142] acquiring lock: {Name:mke9d9a6b3c42219545f5aed5860e740f1b28aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:10:04.609539    4562 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:10:04.609925    4562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:10:04.610150    4562 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:10:04.610159    4562 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 11:10:04.610195    4562 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-894000"
	I0802 11:10:04.610205    4562 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-894000"
	I0802 11:10:04.610207    4562 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-894000"
	W0802 11:10:04.610208    4562 addons.go:243] addon storage-provisioner should already be in state true
	I0802 11:10:04.610217    4562 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-894000"
	I0802 11:10:04.610223    4562 host.go:66] Checking if "running-upgrade-894000" exists ...
	I0802 11:10:04.610247    4562 config.go:182] Loaded profile config "running-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:10:04.611143    4562 kapi.go:59] client config for running-upgrade-894000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/running-upgrade-894000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103eb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 11:10:04.611265    4562 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-894000"
	W0802 11:10:04.611270    4562 addons.go:243] addon default-storageclass should already be in state true
	I0802 11:10:04.611277    4562 host.go:66] Checking if "running-upgrade-894000" exists ...
	I0802 11:10:04.613248    4562 out.go:177] * Verifying Kubernetes components...
	I0802 11:10:04.613528    4562 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 11:10:04.617476    4562 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 11:10:04.617484    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:10:04.621069    4562 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:10:00.217627    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:00.217751    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:04.624251    4562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:10:04.628303    4562 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:10:04.628309    4562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 11:10:04.628314    4562 sshutil.go:53] new ssh client: &{IP:localhost Port:50280 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/running-upgrade-894000/id_rsa Username:docker}
	I0802 11:10:04.717485    4562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:10:04.722541    4562 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:10:04.722580    4562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:10:04.726458    4562 api_server.go:72] duration metric: took 116.301833ms to wait for apiserver process to appear ...
	I0802 11:10:04.726464    4562 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:10:04.726470    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:04.732921    4562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 11:10:04.756215    4562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:10:05.217954    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:05.217973    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:09.728469    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:09.728536    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:10.219956    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:10.220121    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:10.240034    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:10.240131    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:10.252370    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:10.252452    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:10.262951    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:10.263026    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:10.273503    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:10.273574    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:10.284493    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:10.284568    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:10.295228    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:10.295292    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:10.305164    4699 logs.go:276] 0 containers: []
	W0802 11:10:10.305176    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:10.305229    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:10.315910    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:10.315927    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:10.315933    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:10.327411    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:10.327422    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:10.340175    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:10.340187    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:10.352337    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:10.352351    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:10.356867    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:10.356873    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:10.371421    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:10.371432    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:10.386296    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:10.386306    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:10.405087    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:10.405099    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:10.418915    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:10.418926    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:10.438762    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:10.438773    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:10.454096    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:10.454109    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:10.570478    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:10.570490    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:10.583320    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:10.583331    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:10.607885    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:10.607895    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:10.644960    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:10.644968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:10.658526    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:10.658540    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:10.670638    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:10.670650    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:13.213023    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:14.728737    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:14.728776    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:18.214233    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:18.214403    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:18.226437    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:18.226515    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:18.237522    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:18.237596    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:18.247842    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:18.247913    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:18.257842    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:18.257910    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:18.268823    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:18.268893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:18.279666    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:18.279735    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:18.289681    4699 logs.go:276] 0 containers: []
	W0802 11:10:18.289692    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:18.289752    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:18.301107    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:18.301127    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:18.301133    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:18.312569    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:18.312583    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:18.324428    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:18.324439    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:18.364619    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:18.364629    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:18.378700    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:18.378710    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:18.390672    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:18.390687    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:18.408585    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:18.408596    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:18.420072    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:18.420083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:18.437766    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:18.437777    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:18.449685    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:18.449697    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:18.495451    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:18.495469    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:18.511315    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:18.511327    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:18.528449    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:18.528459    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:18.540001    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:18.540012    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:18.566848    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:18.566865    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:18.571348    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:18.571355    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:18.607648    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:18.607659    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:19.729072    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:19.729116    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:21.128355    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:24.729529    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:24.729580    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:26.130659    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:26.130900    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:26.156174    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:26.156290    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:26.172937    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:26.173021    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:26.185873    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:26.185952    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:26.198603    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:26.198678    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:26.211328    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:26.211403    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:26.221680    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:26.221745    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:26.232142    4699 logs.go:276] 0 containers: []
	W0802 11:10:26.232152    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:26.232211    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:26.243206    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:26.243224    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:26.243231    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:26.256461    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:26.256471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:26.267407    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:26.267419    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:26.279426    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:26.279442    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:26.294827    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:26.294840    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:26.310151    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:26.310161    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:26.349596    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:26.349605    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:26.364137    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:26.364151    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:26.402171    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:26.402183    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:26.413681    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:26.413693    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:26.418261    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:26.418268    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:26.433074    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:26.433086    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:26.472221    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:26.472232    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:26.483407    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:26.483419    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:26.508629    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:26.508637    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:26.522864    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:26.522874    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:26.539500    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:26.539511    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:29.052707    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:29.730083    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:29.730108    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:34.054936    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:34.055230    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:34.073687    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:34.073780    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:34.087232    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:34.087307    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:34.098721    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:34.098795    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:34.111974    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:34.112048    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:34.121910    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:34.121973    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:34.131960    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:34.132036    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:34.142103    4699 logs.go:276] 0 containers: []
	W0802 11:10:34.142115    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:34.142169    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:34.152971    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:34.152989    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:34.152994    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:34.164827    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:34.164836    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:34.189445    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:34.189454    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:34.205667    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:34.205678    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:34.245903    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:34.245913    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:34.282778    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:34.282793    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:34.297048    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:34.297060    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:34.315062    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:34.315073    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:34.329928    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:34.329941    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:34.341597    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:34.341608    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:34.381043    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:34.381053    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:34.385765    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:34.385772    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:34.399274    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:34.399286    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:34.410616    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:34.410631    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:34.421509    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:34.421522    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:34.433517    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:34.433529    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:34.450578    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:34.450591    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:34.730743    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:34.730782    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0802 11:10:35.073471    4562 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0802 11:10:35.078993    4562 out.go:177] * Enabled addons: storage-provisioner
	I0802 11:10:36.968148    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:35.086920    4562 addons.go:510] duration metric: took 30.47783375s for enable addons: enabled=[storage-provisioner]
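Only storage-provisioner ends up enabled; default-storageclass failed above because the apiserver never became reachable. Were the apiserver healthy, the result could be checked from the host; a hypothetical follow-up, not part of the recorded run:

    # Inspect the provisioner pod and any default StorageClass (requires a reachable apiserver).
    kubectl --context running-upgrade-894000 -n kube-system get pod storage-provisioner
    kubectl --context running-upgrade-894000 get storageclass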
	I0802 11:10:39.731719    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:39.731768    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:41.970444    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:41.970603    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:41.984287    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:41.984363    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:41.995931    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:41.996010    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:42.007763    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:42.007841    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:42.024164    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:42.024238    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:42.035927    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:42.035991    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:42.046511    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:42.046584    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:42.056428    4699 logs.go:276] 0 containers: []
	W0802 11:10:42.056441    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:42.056508    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:42.066809    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
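Each gather cycle begins by resolving component containers with docker name filters, as above. The k8s_<component> prefix is the naming convention applied to pod containers under the docker runtime, and -a includes exited containers, which is likely why most components report two IDs here (an earlier, exited instance plus the current one). A local sketch of the discovery step (containerIDs is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns all container IDs, running or exited, whose name
    // matches the k8s_<component> prefix, mirroring the
    // `docker ps -a --filter=name=... --format={{.ID}}` calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Zero matches produces the warning seen for "kindnet" above.
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }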
	I0802 11:10:42.066828    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:42.066833    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:42.080878    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:42.080894    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:42.092837    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:42.092846    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:42.108991    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:42.109001    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:42.121177    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:42.121189    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:42.125818    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:42.125824    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:42.145484    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:42.145495    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:42.159664    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:42.159673    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:42.170725    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:42.170737    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:42.208229    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:42.208240    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:42.244977    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:42.244988    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:42.282851    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:42.282869    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:42.293963    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:42.293976    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:42.312260    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:42.312269    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:42.326099    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:42.326113    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:42.338398    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:42.338412    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:42.350420    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:42.350431    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
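System-level sources are gathered the same way: the kubelet unit via journalctl, the runtime via the docker and cri-docker units together, and the kernel ring buffer via dmesg filtered to warn-and-above then tailed (the `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400` lines above). A sketch of the journalctl step, assuming systemd and passwordless sudo on the target (lastUnitLines is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // lastUnitLines mirrors the journalctl commands in the log: the newest
    // n entries for one or more systemd units.
    func lastUnitLines(n int, units ...string) (string, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", fmt.Sprint(n))
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Same unit sets as the "kubelet" and "Docker" steps above.
        for _, units := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
            out, err := lastUnitLines(400, units...)
            if err != nil {
                fmt.Println("journalctl failed:", err)
                continue
            }
            fmt.Print(out)
        }
    }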
	I0802 11:10:44.733132    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:44.733180    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:44.874319    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:49.734878    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:49.734906    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:49.876365    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:49.876487    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:49.890225    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:49.890299    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:49.901583    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:49.901656    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:49.912437    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:49.912506    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:49.922823    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:49.922893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:49.933026    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:49.933096    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:49.943547    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:49.943621    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:49.954131    4699 logs.go:276] 0 containers: []
	W0802 11:10:49.954152    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:49.954211    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:49.964687    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:49.964704    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:49.964710    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:49.980381    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:49.980392    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:49.992157    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:49.992169    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:50.013951    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:50.013968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:50.027764    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:50.027774    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:50.031867    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:50.031873    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:50.068529    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:50.068548    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:50.080221    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:50.080234    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:50.092407    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:50.092420    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:50.137844    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:50.137858    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:50.151789    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:50.151802    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:50.166078    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:50.166091    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:50.191013    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:50.191024    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:50.230111    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:50.230126    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:50.242157    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:50.242170    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:50.257111    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:50.257124    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:50.268557    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:50.268568    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
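The recurring 'container status' command is a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. When which finds no crictl, the command substitution degrades to the bare word crictl, that invocation fails, and || falls through to plain docker, so the step works on both CRI-enabled and docker-only guests. A local sketch of the same one-liner (no SSH hop):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Identical command string to the "container status" step above.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }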
	I0802 11:10:52.782407    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:54.735646    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:54.735673    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:57.784584    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:57.784777    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:57.802096    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:57.802189    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:57.815540    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:57.815622    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:57.828840    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:57.828902    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:57.839861    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:57.839936    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:57.850450    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:57.850520    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:57.861027    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:57.861093    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:57.871194    4699 logs.go:276] 0 containers: []
	W0802 11:10:57.871205    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:57.871265    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:57.882842    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:57.882861    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:57.882866    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:57.920536    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:57.920546    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:57.957085    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:57.957097    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:57.971846    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:57.971861    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:58.010236    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:58.010252    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:58.023705    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:58.023715    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:58.048798    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:58.048814    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:58.065547    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:58.065559    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:58.077028    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:58.077041    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:58.098361    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:58.098371    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:58.110573    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:58.110582    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:58.123504    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:58.123516    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:58.134851    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:58.134867    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:58.138923    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:58.138930    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:58.151003    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:58.151017    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:58.165849    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:58.165859    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:58.177354    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:58.177366    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
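The 'describe nodes' step does not rely on a host kubectl: it runs the version-pinned binary minikube caches under /var/lib/minikube/binaries/v1.24.1/ against the in-guest kubeconfig, which keeps client/server version skew in check. A sketch of the same invocation, assuming those guest paths exist where it runs:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // kubectl pinned to the cluster's Kubernetes version (v1.24.1 here).
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }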
	I0802 11:10:59.737669    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:59.737710    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:00.702288    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:04.739884    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:04.740064    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:04.758912    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:04.758991    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:04.783595    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:04.783664    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:04.795045    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:04.795123    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:04.805774    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:04.805835    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:04.816458    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:04.816518    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:04.826947    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:04.827016    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:04.837958    4562 logs.go:276] 0 containers: []
	W0802 11:11:04.837971    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:04.838027    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:04.848493    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:04.848508    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:04.848514    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:04.860215    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:04.860229    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:04.877658    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:04.877669    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:04.902828    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:04.902839    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:04.917053    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:04.917064    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:04.930976    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:04.930990    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:04.942457    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:04.942468    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:04.957997    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:04.958007    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:05.704543    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:05.704747    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:05.725637    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:05.725734    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:05.740908    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:05.740985    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:05.754827    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:05.754892    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:05.765944    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:05.766022    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:05.776786    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:05.776851    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:05.794245    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:05.794315    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:05.805285    4699 logs.go:276] 0 containers: []
	W0802 11:11:05.805298    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:05.805359    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:05.826773    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:05.826792    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:05.826797    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:05.840877    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:05.840888    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:05.857730    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:05.857740    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:05.896485    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:05.896495    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:05.910686    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:05.910697    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:05.924754    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:05.924767    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:05.936553    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:05.936563    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:05.962964    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:05.962972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:05.996631    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:05.996642    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:06.034860    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:06.034870    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:06.046318    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:06.046328    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:06.064337    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:06.064350    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:06.076282    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:06.076292    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:06.088448    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:06.088460    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:06.105940    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:06.105951    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:06.119746    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:06.119757    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:06.124265    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:06.124271    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:08.637632    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:04.969706    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:04.969716    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:04.981154    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:04.981164    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:05.015976    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:05.015987    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:05.020374    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:05.020383    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:05.062782    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:05.062793    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:07.579106    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:13.639839    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:13.640101    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:13.659771    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:13.659857    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:13.674816    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:13.674904    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:13.687104    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:13.687176    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:13.697641    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:13.697718    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:13.708294    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:13.708364    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:13.718974    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:13.719045    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:13.729432    4699 logs.go:276] 0 containers: []
	W0802 11:11:13.729446    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:13.729525    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:13.740189    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:13.740208    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:13.740213    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:13.754347    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:13.754358    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:13.767419    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:13.767431    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:13.779331    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:13.779342    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:13.797010    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:13.797021    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:13.833820    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:13.833832    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:13.872932    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:13.872943    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:13.885400    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:13.885413    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:13.904259    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:13.904270    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:13.923364    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:13.923376    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:13.935530    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:13.935542    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:13.947380    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:13.947390    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:13.970713    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:13.970722    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:13.984248    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:13.984259    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:13.996013    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:13.996027    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:14.009655    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:14.009665    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:14.047644    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:14.047652    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:12.581225    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:12.581569    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:12.610201    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:12.610343    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:12.632362    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:12.632473    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:12.645373    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:12.645440    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:12.656812    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:12.656882    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:12.674617    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:12.674686    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:12.685410    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:12.685474    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:12.695589    4562 logs.go:276] 0 containers: []
	W0802 11:11:12.695601    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:12.695659    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:12.707231    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:12.707246    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:12.707251    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:12.718922    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:12.718933    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:12.754307    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:12.754317    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:12.759008    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:12.759014    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:12.776633    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:12.776642    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:12.787966    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:12.787978    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:12.803455    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:12.803466    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:12.826193    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:12.826203    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:12.863415    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:12.863431    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:12.880338    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:12.880351    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:12.892102    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:12.892112    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:12.903996    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:12.904008    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:12.915328    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:12.915336    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:16.554106    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:15.441565    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:21.556457    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:21.556734    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:21.590640    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:21.590773    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:21.609627    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:21.609722    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:21.624191    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:21.624272    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:21.638666    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:21.638740    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:21.649282    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:21.649358    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:21.660147    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:21.660220    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:21.670709    4699 logs.go:276] 0 containers: []
	W0802 11:11:21.670721    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:21.670776    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:21.682194    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:21.682212    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:21.682217    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:21.696636    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:21.696649    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:21.712862    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:21.712875    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:21.736085    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:21.736092    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:21.771320    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:21.771332    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:21.810160    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:21.810170    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:21.821654    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:21.821666    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:21.833378    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:21.833389    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:21.845211    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:21.845222    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:21.857419    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:21.857432    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:21.872166    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:21.872176    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:21.887958    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:21.887968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:21.905576    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:21.905585    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:21.919598    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:21.919608    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:21.957612    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:21.957624    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:21.961827    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:21.961835    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:21.973387    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:21.973397    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:24.486957    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:20.443745    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:20.443873    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:20.456356    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:20.456432    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:20.467607    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:20.467671    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:20.478048    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:20.478120    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:20.488881    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:20.488947    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:20.500903    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:20.500978    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:20.511302    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:20.511368    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:20.521987    4562 logs.go:276] 0 containers: []
	W0802 11:11:20.522003    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:20.522064    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:20.534761    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:20.534775    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:20.534782    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:20.545987    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:20.546000    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:20.563970    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:20.563980    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:20.577433    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:20.577446    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:20.589210    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:20.589224    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:20.614846    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:20.614854    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:20.629888    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:20.629899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:20.642141    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:20.642151    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:20.663276    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:20.663286    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:20.674671    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:20.674681    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:20.709257    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:20.709265    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:20.714123    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:20.714131    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:20.748997    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:20.749007    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:23.263316    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:29.487237    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:29.487396    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:29.503956    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:29.504054    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:29.531384    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:29.531479    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:29.543239    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:29.543313    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:29.553873    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:29.553946    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:29.565081    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:29.565146    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:29.575492    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:29.575569    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:29.585597    4699 logs.go:276] 0 containers: []
	W0802 11:11:29.585609    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:29.585665    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:29.596949    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:29.596966    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:29.596972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:29.634346    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:29.634357    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:29.650729    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:29.650743    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:29.665004    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:29.665017    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:29.676255    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:29.676267    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:29.698411    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:29.698425    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:29.710571    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:29.710586    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:29.722408    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:29.722423    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:29.736852    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:29.736862    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:29.741140    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:29.741146    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:29.781815    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:29.781826    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:29.793577    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:29.793589    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:29.811250    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:29.811264    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:29.828466    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:29.828480    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:29.846193    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:29.846209    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:28.265774    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:28.266004    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:28.287107    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:28.287205    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:28.304320    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:28.304399    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:28.316029    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:28.316102    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:28.326929    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:28.326996    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:28.337467    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:28.337544    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:28.348219    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:28.348295    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:28.359244    4562 logs.go:276] 0 containers: []
	W0802 11:11:28.359256    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:28.359318    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:28.370207    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:28.370222    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:28.370228    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:28.381542    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:28.381555    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:28.415402    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:28.415412    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:28.431365    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:28.431375    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:28.447172    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:28.447182    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:28.458663    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:28.458674    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:28.470083    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:28.470094    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:28.487001    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:28.487016    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:28.510184    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:28.510195    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:28.514821    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:28.514828    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:28.549658    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:28.549672    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:28.560893    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:28.560903    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:28.579018    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:28.579029    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:29.871026    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:29.871035    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:29.907175    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:29.907186    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:32.423129    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:31.092584    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:37.425192    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:37.425311    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:37.436491    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:37.436565    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:37.446981    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:37.447047    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:37.457196    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:37.457258    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:37.467708    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:37.467782    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:37.478847    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:37.478921    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:37.493471    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:37.493534    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:37.503702    4699 logs.go:276] 0 containers: []
	W0802 11:11:37.503712    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:37.503772    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:37.518256    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:37.518273    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:37.518278    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:37.532888    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:37.532899    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:37.550157    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:37.550166    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:37.574120    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:37.574129    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:37.585737    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:37.585752    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:37.599970    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:37.599981    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:37.639460    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:37.639475    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:37.653070    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:37.653083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:37.689426    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:37.689438    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:37.701456    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:37.701468    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:37.715222    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:37.715236    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:37.727193    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:37.727207    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:37.731357    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:37.731364    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:37.764815    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:37.764828    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:37.776358    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:37.776369    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:37.792341    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:37.792353    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:37.807861    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:37.807872    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:36.094775    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:36.095086    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:36.117993    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:36.118131    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:36.134091    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:36.134165    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:36.151371    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:36.151435    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:36.162130    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:36.162198    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:36.173559    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:36.173630    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:36.184832    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:36.184900    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:36.195788    4562 logs.go:276] 0 containers: []
	W0802 11:11:36.195800    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:36.195860    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:36.211336    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:36.211351    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:36.211357    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:36.244097    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:36.244105    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:36.258678    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:36.258688    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:36.271017    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:36.271031    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:36.298442    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:36.298453    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:36.310935    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:36.310951    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:36.315368    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:36.315377    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:36.354168    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:36.354181    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:36.369264    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:36.369278    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:36.382125    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:36.382137    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:36.401239    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:36.401250    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:36.417300    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:36.417313    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:36.431213    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:36.431227    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:38.957803    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:40.321790    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:43.960172    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:43.960337    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:43.974239    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:43.974318    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:43.986606    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:43.986669    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:43.997917    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:43.997991    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:44.009286    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:44.009347    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:44.019767    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:44.019837    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:44.030691    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:44.030759    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:44.041563    4562 logs.go:276] 0 containers: []
	W0802 11:11:44.041574    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:44.041634    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:44.052328    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:44.052344    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:44.052349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:44.068169    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:44.068179    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:44.092413    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:44.092424    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:44.117428    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:44.117437    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:44.152246    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:44.152258    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:44.191897    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:44.191910    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:44.207127    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:44.207140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:44.219537    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:44.219550    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:44.231641    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:44.231653    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:44.255252    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:44.255262    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:44.268523    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:44.268532    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:44.273111    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:44.273118    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:44.289434    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:44.289444    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:45.324097    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:45.324272    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:45.342673    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:45.342766    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:45.362261    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:45.362335    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:45.373650    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:45.373723    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:45.387586    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:45.387653    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:45.398724    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:45.398794    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:45.408912    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:45.409006    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:45.418808    4699 logs.go:276] 0 containers: []
	W0802 11:11:45.418818    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:45.418873    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:45.429723    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:45.429744    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:45.429750    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:45.433926    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:45.433935    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:45.445469    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:45.445479    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:45.457738    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:45.457752    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:45.481282    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:45.481289    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:45.518649    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:45.518662    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:45.532823    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:45.532834    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:45.548507    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:45.548519    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:45.560407    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:45.560418    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:45.599073    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:45.599083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:45.610194    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:45.610206    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:45.621546    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:45.621558    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:45.639431    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:45.639444    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:45.677923    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:45.677936    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:45.694794    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:45.694805    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:45.712774    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:45.712785    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:45.724166    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:45.724176    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
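Note: each probe/timeout pair in this run has the same shape: api_server.go logs "Checking apiserver healthz" at :253, and roughly five seconds later logs "stopped ... Client.Timeout exceeded" at :269. Below is a minimal Go sketch of such a bounded healthz probe, assuming a 5-second client timeout inferred from the timestamps; skipping TLS verification is a simplification for the sketch (minikube itself trusts the cluster CA), and checkHealthz is an illustrative name.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz issues one GET against the apiserver /healthz endpoint
    // with a hard client-side deadline, the failure mode seen repeatedly
    // in this log: the request times out before any headers arrive.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" above
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only assumption
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println("stopped:", err)
    	}
    }

When the probe fails, the loop falls through to the container-discovery and log-gathering pass, which is why every timeout above is immediately followed by a fresh block of docker ps and "Gathering logs" lines.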
	I0802 11:11:48.241700    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:46.802408    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:53.243888    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:53.243991    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:53.255166    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:53.255241    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:53.265971    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:53.266052    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:53.276918    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:53.276988    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:53.287820    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:53.287893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:53.298961    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:53.299031    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:53.309494    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:53.309557    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:53.319843    4699 logs.go:276] 0 containers: []
	W0802 11:11:53.319858    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:53.319921    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:53.331020    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:53.331038    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:53.331044    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:53.345037    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:53.345051    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:53.384435    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:53.384445    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:53.395795    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:53.395808    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:53.408029    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:53.408040    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:53.445293    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:53.445302    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:53.459645    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:53.459658    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:53.474474    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:53.474485    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:53.493197    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:53.493208    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:53.505592    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:53.505605    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:53.523535    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:53.523549    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:53.535475    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:53.535488    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:53.555063    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:53.555078    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:53.591702    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:53.591717    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:53.602768    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:53.602780    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:53.616540    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:53.616553    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:53.642022    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:53.642033    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:51.804607    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:51.804791    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:51.818272    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:51.818348    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:51.829774    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:51.829837    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:51.840811    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:51.840874    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:51.852122    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:51.852194    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:51.863366    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:51.863435    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:51.874414    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:51.874477    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:51.886787    4562 logs.go:276] 0 containers: []
	W0802 11:11:51.886800    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:51.886858    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:51.898371    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:51.898384    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:51.898390    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:51.933689    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:51.933700    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:51.949061    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:51.949074    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:51.964486    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:51.964499    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:51.976753    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:51.976765    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:51.994258    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:51.994270    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:11:52.006140    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:52.006152    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:52.029155    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:52.029163    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:52.033751    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:52.033757    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:52.048233    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:52.048245    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:52.060447    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:52.060459    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:52.072570    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:52.072583    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:52.084545    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:52.084558    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:54.618694    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:56.148251    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:59.620817    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:59.621021    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:59.640205    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:11:59.640279    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:59.653535    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:11:59.653612    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:59.665587    4562 logs.go:276] 2 containers: [1fbb8e62e165 e2699333b635]
	I0802 11:11:59.665649    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:59.677593    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:11:59.677666    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:59.688267    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:11:59.688333    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:59.699311    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:11:59.699372    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:59.710252    4562 logs.go:276] 0 containers: []
	W0802 11:11:59.710261    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:59.710322    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:59.721835    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:11:59.721855    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:59.721860    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:59.755147    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:59.755155    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:59.759353    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:11:59.759362    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:11:59.773803    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:11:59.773818    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:11:59.787133    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:11:59.787143    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:11:59.803159    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:59.803171    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:59.827861    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:11:59.827870    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:59.840101    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:59.840111    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:59.876589    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:11:59.876603    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:11:59.891942    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:11:59.891952    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:11:59.904700    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:11:59.904710    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:11:59.917412    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:11:59.917426    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:11:59.936170    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:11:59.936180    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:01.150711    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:01.151183    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:01.190711    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:01.190863    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:01.218151    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:01.218252    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:01.232901    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:01.232983    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:01.244850    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:01.244926    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:01.255324    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:01.255395    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:01.266012    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:01.266084    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:01.276303    4699 logs.go:276] 0 containers: []
	W0802 11:12:01.276318    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:01.276381    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:01.287455    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:01.287475    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:01.287480    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:01.325887    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:01.325896    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:01.329966    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:01.329972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:01.364431    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:01.364442    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:01.379072    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:01.379083    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:01.402874    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:01.402882    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:01.414780    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:01.414793    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:01.430495    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:01.430505    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:01.447857    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:01.447870    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:01.485204    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:01.485214    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:01.499272    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:01.499283    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:01.511531    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:01.511544    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:01.522966    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:01.522981    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:01.536530    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:01.536539    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:01.553190    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:01.553201    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:01.567725    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:01.567735    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:01.581840    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:01.581850    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:04.095361    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:02.450120    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:09.097966    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:09.098368    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:09.139585    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:09.139726    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:09.158897    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:09.159015    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:09.174115    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:09.174196    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:09.187029    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:09.187097    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:09.198045    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:09.198115    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:09.214414    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:09.214486    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:09.224496    4699 logs.go:276] 0 containers: []
	W0802 11:12:09.224507    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:09.224560    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:09.235719    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:09.235736    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:09.235741    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:09.247980    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:09.247990    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:09.260165    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:09.260176    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:09.284438    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:09.284444    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:09.296955    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:09.296968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:09.313421    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:09.313435    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:09.352599    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:09.352615    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:09.364941    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:09.364955    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:09.400001    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:09.400012    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:09.417130    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:09.417141    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:09.433401    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:09.433413    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:09.450998    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:09.451013    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:09.464402    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:09.464413    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:09.502726    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:09.502732    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:09.517062    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:09.517073    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:09.528921    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:09.528933    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:09.543373    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:09.543385    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:07.452229    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:07.452409    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:07.469722    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:07.469807    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:07.484741    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:07.484811    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:07.496191    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:07.496264    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:07.507259    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:07.507329    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:07.518795    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:07.518866    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:07.529823    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:07.529888    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:07.540955    4562 logs.go:276] 0 containers: []
	W0802 11:12:07.540967    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:07.541029    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:07.552409    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:07.552428    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:07.552434    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:07.588341    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:07.588353    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:07.603216    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:07.603231    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:07.614856    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:07.614867    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:07.629230    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:07.629243    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:07.647246    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:07.647255    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:07.652364    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:07.652373    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:07.667778    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:07.667788    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:07.702553    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:07.702561    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:07.716765    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:07.716775    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:07.728174    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:07.728188    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:07.739858    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:07.739867    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:07.751988    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:07.751997    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:07.777295    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:07.777303    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:07.791357    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:07.791368    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
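Note: besides the per-container docker logs --tail 400 <id> calls, every gathering pass above pulls the same host-level sources: the kubelet and Docker units via journalctl, the kernel ring buffer via dmesg, and container status via crictl with a docker ps fallback. The sketch below shows that fan-out run locally for illustration; the command strings are copied verbatim from this report, while gather() and local execution (instead of minikube's ssh_runner over SSH into the guest) are assumptions of the sketch.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command through bash, exactly as the
    // report's ssh_runner lines do, and prints whatever comes back.
    func gather(name, command string) {
    	fmt.Println(">>>", name)
    	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	// Host-level sources collected on every pass; map iteration order
    	// is nondeterministic, which is harmless for a diagnostic dump.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		gather(name, cmd)
    	}
    }

Tailing each container at a fixed 400 lines keeps one pass bounded even when, as in the 11:12:07 cycle above, the coredns count has grown from two containers to four.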
	I0802 11:12:12.049669    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:10.307891    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:17.051751    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:17.051999    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:17.075137    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:17.075267    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:17.091570    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:17.091650    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:17.104997    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:17.105082    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:17.116219    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:17.116290    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:17.127188    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:17.127256    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:17.137933    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:17.138006    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:17.148109    4699 logs.go:276] 0 containers: []
	W0802 11:12:17.148126    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:17.148182    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:17.158883    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:17.158900    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:17.158906    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:17.171118    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:17.171131    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:17.182573    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:17.182587    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:17.196161    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:17.196174    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:17.236353    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:17.236369    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:17.248683    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:17.248693    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:17.260350    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:17.260362    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:17.283690    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:17.283700    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:17.322366    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:17.322382    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:17.338061    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:17.338083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:17.360081    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:17.360097    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:17.372370    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:17.372381    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:17.377015    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:17.377022    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:17.413628    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:17.413640    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:17.428120    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:17.428131    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:17.441819    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:17.441830    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:17.457460    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:17.457471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:15.309995    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:15.310183    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:15.328908    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:15.328995    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:15.343821    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:15.343902    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:15.355812    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:15.355886    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:15.366960    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:15.367032    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:15.377920    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:15.377993    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:15.389920    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:15.389994    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:15.400681    4562 logs.go:276] 0 containers: []
	W0802 11:12:15.400694    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:15.400755    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:15.411202    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:15.411221    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:15.411225    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:15.422685    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:15.422698    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:15.437214    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:15.437224    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:15.460486    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:15.460496    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:15.474933    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:15.474944    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:15.486423    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:15.486438    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:15.504215    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:15.504225    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:15.538797    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:15.538811    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:15.550481    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:15.550491    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:15.561937    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:15.561948    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:15.573158    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:15.573169    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:15.606772    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:15.606780    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:15.611342    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:15.611350    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:15.622888    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:15.622902    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:15.638432    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:15.638442    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
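
Each diagnostic round begins by mapping every control-plane component to its container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, as in the lines above; kubeadm-flavored Docker names all pod containers with a `k8s_` prefix, which is why kindnet yields "0 containers" and coredns yields four. A rough equivalent of that enumeration step (the helper name is ours):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of all containers, running or exited, whose
    // name carries the kubeadm-style k8s_<component> prefix -- the same
    // "docker ps -a --filter=name=... --format={{.ID}}" call as above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
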
	I0802 11:12:18.155276    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:19.973015    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:23.157071    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:23.157296    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:23.182554    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:23.182658    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:23.197660    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:23.197740    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:23.212072    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:23.212145    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:23.223508    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:23.223582    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:23.234154    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:23.234215    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:23.244455    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:23.244517    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:23.254625    4562 logs.go:276] 0 containers: []
	W0802 11:12:23.254639    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:23.254704    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:23.264965    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:23.264982    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:23.264987    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:23.276699    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:23.276707    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:23.294960    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:23.294971    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:23.307451    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:23.307465    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:23.319183    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:23.319196    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:23.338437    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:23.338447    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:23.373890    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:23.373907    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:23.410177    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:23.410188    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:23.425401    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:23.425412    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:23.436530    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:23.436542    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:23.459736    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:23.459745    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:23.478004    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:23.478014    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:23.489355    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:23.489365    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:23.493736    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:23.493742    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:23.507440    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:23.507448    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
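
Every "Gathering logs for <component> [<id>]" step then tails that container with `docker logs --tail 400 <id>`, so the report carries at most the last 400 lines per container. A self-contained sketch of that call:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
    )

    // tailContainerLogs captures a container's most recent n log lines,
    // mirroring the `docker logs --tail 400 <id>` calls above. Combined
    // output matters: `docker logs` replays the container's stderr too.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", strconv.Itoa(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLogs("bd9cc7f29d3b", 400) // the etcd container above
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
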
	I0802 11:12:24.975254    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:24.975437    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:24.999821    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:24.999933    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:25.016692    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:25.016782    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:25.029405    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:25.029477    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:25.040363    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:25.040431    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:25.051180    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:25.051237    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:25.061720    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:25.061791    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:25.072263    4699 logs.go:276] 0 containers: []
	W0802 11:12:25.072276    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:25.072330    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:25.082945    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:25.082963    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:25.082968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:25.094672    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:25.094683    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:25.105609    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:25.105621    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:25.119690    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:25.119700    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:25.131723    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:25.131735    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:25.143276    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:25.143287    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:25.167402    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:25.167408    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:25.179225    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:25.179239    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:25.218465    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:25.218475    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:25.232626    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:25.232636    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:25.270414    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:25.270425    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:25.285702    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:25.285714    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:25.299184    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:25.299197    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:25.317028    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:25.317045    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:25.321123    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:25.321130    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:25.356680    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:25.356694    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:25.370699    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:25.370709    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
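
The "container status" step is the one shell one-liner with a built-in fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is installed and drops back to plain `docker ps -a` when it is missing or fails. The same logic written out in Go (our helper, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when present and falls back to
    // `docker ps -a`, like the shell one-liner in the log.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
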
	I0802 11:12:27.888220    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:26.021402    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:32.890428    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:32.890588    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:32.901197    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:32.901269    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:32.911984    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:32.912055    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:32.922487    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:32.922560    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:32.932762    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:32.932841    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:32.943451    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:32.943520    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:32.954311    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:32.954375    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:32.964571    4699 logs.go:276] 0 containers: []
	W0802 11:12:32.964585    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:32.964648    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:32.975391    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:32.975412    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:32.975417    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:33.012595    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:33.012602    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:33.048972    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:33.048982    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:33.060858    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:33.060869    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:33.072485    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:33.072498    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:33.086382    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:33.086394    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:33.099073    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:33.099084    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:33.138505    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:33.138517    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:33.152352    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:33.152362    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:33.166595    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:33.166606    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:33.181703    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:33.181715    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:33.192929    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:33.192941    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:33.208087    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:33.208099    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:33.225337    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:33.225349    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:33.236560    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:33.236569    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:33.247672    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:33.247684    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:33.270096    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:33.270103    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
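
Host-side logs round out each dump: the kubelet journal, the docker and cri-docker journals, and a dmesg view filtered to warnings and above (with util-linux dmesg, -P disables the pager, -H keeps human-readable timestamps, -L=never drops color codes). The runner ships these through /bin/bash -c over SSH into the guest; run locally, the same commands look like this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // the exact shell commands from the log; the real runner executes
        // them inside the guest VM rather than on the local host
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range cmds {
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput() // best effort
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }
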
	I0802 11:12:31.023924    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:31.024145    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:31.052889    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:31.052988    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:31.068246    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:31.068315    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:31.081162    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:31.081242    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:31.094527    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:31.094598    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:31.104918    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:31.104988    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:31.115238    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:31.115299    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:31.125484    4562 logs.go:276] 0 containers: []
	W0802 11:12:31.125498    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:31.125558    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:31.135933    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:31.135949    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:31.135953    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:31.147840    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:31.147851    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:31.159717    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:31.159726    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:31.171319    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:31.171330    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:31.204505    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:31.204511    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:31.208891    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:31.208899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:31.223757    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:31.223770    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:31.235348    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:31.235361    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:31.272405    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:31.272418    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:31.286455    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:31.286466    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:31.298578    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:31.298588    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:31.310582    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:31.310592    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:31.328754    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:31.328766    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:31.354374    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:31.354385    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:31.370010    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:31.370023    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:33.886383    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:35.775930    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:38.888501    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:38.888721    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:38.905417    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:38.905509    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:38.918479    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:38.918553    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:38.929537    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:38.929613    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:38.939889    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:38.939953    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:38.950027    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:38.950090    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:38.960516    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:38.960585    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:38.970926    4562 logs.go:276] 0 containers: []
	W0802 11:12:38.970937    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:38.970994    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:38.981316    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:38.981333    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:38.981339    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:39.016375    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:39.016387    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:39.027857    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:39.027868    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:39.040074    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:39.040085    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:39.057623    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:39.057635    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:39.069050    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:39.069062    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:39.073377    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:39.073386    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:39.084448    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:39.084460    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:39.099577    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:39.099587    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:39.114253    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:39.114267    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:39.140267    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:39.140275    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:39.174256    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:39.174263    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:39.188292    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:39.188300    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:39.203521    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:39.203529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:39.215782    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:39.215793    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:40.778123    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:40.778312    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:40.799767    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:40.799845    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:40.812313    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:40.812388    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:40.823102    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:40.823168    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:40.834015    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:40.834082    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:40.844596    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:40.844665    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:40.856137    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:40.856199    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:40.866623    4699 logs.go:276] 0 containers: []
	W0802 11:12:40.866633    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:40.866684    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:40.876886    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:40.876903    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:40.876909    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:40.914788    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:40.914800    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:40.928862    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:40.928871    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:40.942623    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:40.942632    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:40.954021    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:40.954031    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:40.990302    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:40.990310    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:41.001756    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:41.001767    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:41.013406    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:41.013417    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:41.036245    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:41.036253    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:41.047652    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:41.047664    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:41.052114    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:41.052121    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:41.066305    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:41.066318    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:41.077718    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:41.077731    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:41.092802    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:41.092813    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:41.126393    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:41.126403    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:41.140026    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:41.140039    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:41.151133    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:41.151143    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
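
The "describe nodes" step does not use a host kubectl: it invokes the version-pinned binary minikube stages inside the guest at /var/lib/minikube/binaries/v1.24.1/kubectl, pointed at the guest-local kubeconfig. Sketched below (the wrapper function is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes invokes the kubectl binary minikube provisions inside
    // the guest, pinned to the cluster's Kubernetes version, against the
    // guest-local kubeconfig -- the "describe nodes" command above.
    func describeNodes(version string) (string, error) {
        kubectl := "/var/lib/minikube/binaries/" + version + "/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes("v1.24.1")
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(out)
    }
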
	I0802 11:12:43.675797    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:41.729607    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:48.676797    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:48.676981    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:48.691453    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:48.691537    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:48.702637    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:48.702706    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:48.717624    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:48.717694    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:48.732858    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:48.732930    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:48.743161    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:48.743240    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:48.753647    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:48.753713    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:48.763628    4699 logs.go:276] 0 containers: []
	W0802 11:12:48.763644    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:48.763707    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:48.779759    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:48.779778    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:48.779784    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:48.795333    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:48.795344    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:48.811449    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:48.811463    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:48.829433    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:48.829445    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:48.834230    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:48.834238    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:48.868657    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:48.868671    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:48.906000    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:48.906011    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:48.919802    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:48.919813    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:48.937589    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:48.937600    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:48.959473    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:48.959481    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:48.996023    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:48.996032    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:49.007296    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:49.007307    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:49.021215    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:49.021225    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:49.032252    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:49.032263    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:49.043761    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:49.043773    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:49.059101    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:49.059110    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:49.070987    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:49.070997    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:46.731521    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:46.731772    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:46.758182    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:46.758303    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:46.776717    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:46.776812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:46.790603    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:46.790678    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:46.802702    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:46.802769    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:46.813161    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:46.813232    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:46.824623    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:46.824697    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:46.836415    4562 logs.go:276] 0 containers: []
	W0802 11:12:46.836426    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:46.836491    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:46.849794    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:46.849812    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:46.849818    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:46.861514    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:46.861530    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:46.878789    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:46.878799    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:46.914063    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:46.914075    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:46.981096    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:46.981106    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:46.995884    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:46.995895    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:47.015097    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:47.015109    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:47.027753    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:47.027764    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:47.032097    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:47.032105    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:47.044009    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:47.044019    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:47.055091    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:47.055102    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:47.067281    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:47.067293    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:47.086286    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:47.086296    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:47.100358    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:47.100367    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:47.112118    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:47.112129    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:49.639028    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:51.585091    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:54.641166    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:54.641323    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:54.652233    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:12:54.652309    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:54.663067    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:12:54.663137    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:54.673476    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:12:54.673547    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:54.684079    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:12:54.684146    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:54.694607    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:12:54.694675    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:54.705375    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:12:54.705442    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:54.716428    4562 logs.go:276] 0 containers: []
	W0802 11:12:54.716443    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:54.716510    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:54.726265    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:12:54.726282    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:12:54.726288    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:12:54.743858    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:54.743870    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:54.767289    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:54.767296    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:54.771650    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:12:54.771659    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:12:54.784130    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:12:54.784140    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:12:54.796401    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:12:54.796414    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:12:54.808116    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:12:54.808132    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:54.819315    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:12:54.819324    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:12:54.831119    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:12:54.831130    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:12:54.845881    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:54.845890    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:54.883042    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:12:54.883054    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:12:54.897703    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:12:54.897717    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:12:54.909125    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:54.909136    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:54.943578    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:12:54.943585    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:12:54.957305    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:12:54.957316    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:12:56.587688    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:56.587931    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:56.607205    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:56.607297    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:56.622205    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:56.622278    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:56.634313    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:56.634390    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:56.645395    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:56.645466    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:56.656360    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:56.656434    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:56.667219    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:56.667286    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:56.677611    4699 logs.go:276] 0 containers: []
	W0802 11:12:56.677622    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:56.677683    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:56.687948    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:56.687963    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:56.687968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:56.702937    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:56.702946    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:56.715046    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:56.715057    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:56.732888    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:56.732898    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:56.756385    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:56.756392    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:56.769910    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:56.769923    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:56.781415    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:56.781426    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:56.818887    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:56.818899    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:56.857954    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:56.857981    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:56.892977    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:56.892989    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:56.909367    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:56.909379    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:56.924516    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:56.924527    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:56.942088    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:56.942098    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:56.979600    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:56.979611    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:56.984006    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:56.984012    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:56.997625    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:56.997641    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:57.009771    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:57.009784    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:59.523727    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:57.470347    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:04.526187    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
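The repeated "Client.Timeout exceeded while awaiting headers" failures above come from an HTTP client whose overall request timeout fires before the apiserver returns any response headers. A minimal Go sketch of the same poll pattern (the 5-second timeout and the skip-verify transport are assumptions for illustration, not minikube's actual settings):

    // Illustrative only: poll an apiserver healthz endpoint with a hard
    // client timeout. If the server never answers, client.Get fails with
    // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; matches the ~5s gap in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }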
	I0802 11:13:04.526415    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:04.545505    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:13:04.545605    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:04.559176    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:13:04.559256    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:04.571076    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:13:04.571158    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:04.600618    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:13:04.600712    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:04.616571    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:13:04.616641    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:04.634026    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:13:04.634109    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:04.645226    4699 logs.go:276] 0 containers: []
	W0802 11:13:04.645239    4699 logs.go:278] No container was found matching "kindnet"
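Each gathering cycle above first discovers container IDs per component with a docker ps name filter, then tails each container's logs. A minimal sketch of that discovery step (assuming a local docker CLI; minikube runs the same command over SSH):

    // List container IDs whose name matches a k8s pod-name prefix,
    // mirroring the "docker ps -a --filter=name=... --format={{.ID}}"
    // invocations in the log. Names here are illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line
    }

    func main() {
        ids, err := containerIDs("k8s_kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }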
	I0802 11:13:04.645310    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:04.659262    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:13:04.659282    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:13:04.659288    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:13:04.673419    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:13:04.673430    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:13:04.688351    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:13:04.688363    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:13:04.699621    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:13:04.699633    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:13:04.712105    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:13:04.712116    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:13:04.727365    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:04.727374    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:04.766569    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:04.766577    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:04.770835    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:13:04.770842    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:13:04.810322    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:13:04.810332    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:13:04.828020    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:13:04.828035    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:13:04.839287    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:13:04.839298    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:04.851194    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:13:04.851206    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:13:02.472508    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:02.472626    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:02.484751    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:02.484831    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:02.495353    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:02.495419    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:02.509629    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:02.509724    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:02.520064    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:02.520134    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:02.530847    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:02.530916    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:02.541419    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:02.541482    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:02.551786    4562 logs.go:276] 0 containers: []
	W0802 11:13:02.551799    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:02.551853    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:02.562020    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:02.562036    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:02.562041    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:02.567003    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:02.567020    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:02.578654    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:02.578668    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:02.590754    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:02.590766    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:02.627194    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:02.627207    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:02.682336    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:02.682349    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:02.694420    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:02.694433    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:02.713700    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:02.713712    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:02.731391    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:02.731402    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:02.746321    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:02.746332    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:02.758952    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:02.758966    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:02.770696    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:02.770706    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:02.782935    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:02.782947    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:02.795492    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:02.795502    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:02.821563    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:02.821575    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:04.865252    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:13:04.865262    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:13:04.876391    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:04.876402    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:04.914375    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:13:04.914385    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:13:04.932816    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:13:04.932826    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:13:04.945759    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:04.945769    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:07.470844    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:05.338000    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:12.473313    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:12.473425    4699 kubeadm.go:597] duration metric: took 4m3.983610292s to restartPrimaryControlPlane
	W0802 11:13:12.473469    4699 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 11:13:12.473487    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0802 11:13:13.500225    4699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026761583s)
	I0802 11:13:13.500298    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 11:13:13.505354    4699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:13:13.508128    4699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:13:13.510985    4699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:13:13.510991    4699 kubeadm.go:157] found existing configuration files:
	
	I0802 11:13:13.511014    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0802 11:13:13.514099    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:13:13.514120    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:13:13.517323    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0802 11:13:13.520003    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:13:13.520023    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:13:13.522785    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0802 11:13:13.526250    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:13:13.526273    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:13:13.529414    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0802 11:13:13.532079    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:13:13.532096    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
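The "Process exited with status 2" results above are expected here: ls and grep exit 2 when the target file is absent, which the runner treats as "no stale config to clean" rather than a hard error. A hedged Go sketch of distinguishing that exit status from other failures (names are illustrative, not minikube's):

    // Run a command and inspect its exit code, treating a nonzero status
    // differently from a failure to launch the process at all.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("ls", "-la", "/etc/kubernetes/admin.conf")
        if err := cmd.Run(); err != nil {
            var ee *exec.ExitError
            if errors.As(err, &ee) {
                // e.g. status 2 when the file does not exist
                fmt.Println("process exited with status", ee.ExitCode())
                return
            }
            fmt.Println("run failed:", err)
        }
    }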
	I0802 11:13:13.534775    4699 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 11:13:13.552647    4699 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0802 11:13:13.552723    4699 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 11:13:13.599546    4699 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 11:13:13.599644    4699 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 11:13:13.599715    4699 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 11:13:13.648011    4699 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 11:13:13.653213    4699 out.go:204]   - Generating certificates and keys ...
	I0802 11:13:13.653250    4699 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 11:13:13.653342    4699 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 11:13:13.653379    4699 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 11:13:13.653418    4699 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 11:13:13.653475    4699 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 11:13:13.653502    4699 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 11:13:13.653535    4699 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 11:13:13.653567    4699 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 11:13:13.653608    4699 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 11:13:13.653652    4699 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 11:13:13.653678    4699 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 11:13:13.653715    4699 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 11:13:13.864034    4699 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 11:13:13.958642    4699 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 11:13:14.085525    4699 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 11:13:14.136663    4699 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 11:13:14.165404    4699 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 11:13:14.165451    4699 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 11:13:14.165473    4699 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 11:13:14.256324    4699 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 11:13:14.259979    4699 out.go:204]   - Booting up control plane ...
	I0802 11:13:14.260025    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 11:13:14.260062    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 11:13:14.260096    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 11:13:14.260138    4699 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 11:13:14.260236    4699 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 11:13:10.340138    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:10.340324    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:10.355626    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:10.355709    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:10.366477    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:10.366556    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:10.377713    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:10.377795    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:10.388295    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:10.388365    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:10.398343    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:10.398413    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:10.408879    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:10.408945    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:10.421005    4562 logs.go:276] 0 containers: []
	W0802 11:13:10.421016    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:10.421080    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:10.431607    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:10.431625    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:10.431630    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:10.443548    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:10.443560    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:10.455042    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:10.455053    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:10.466485    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:10.466499    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:10.478288    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:10.478299    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:10.482985    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:10.482995    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:10.496380    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:10.496393    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:10.509451    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:10.509461    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:10.534733    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:10.534741    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:10.568939    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:10.568948    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:10.583046    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:10.583058    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:10.595397    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:10.595406    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:10.612782    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:10.612792    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:10.651809    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:10.651820    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:10.665056    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:10.665067    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:13.184737    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:18.260056    4699 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001263 seconds
	I0802 11:13:18.260137    4699 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 11:13:18.264643    4699 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 11:13:18.775413    4699 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 11:13:18.775531    4699 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 11:13:19.281408    4699 kubeadm.go:310] [bootstrap-token] Using token: 2w8ki8.s5djwx0dmusw95zk
	I0802 11:13:19.287821    4699 out.go:204]   - Configuring RBAC rules ...
	I0802 11:13:19.287876    4699 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 11:13:19.287921    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 11:13:19.294439    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 11:13:19.295552    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 11:13:19.296666    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 11:13:19.297759    4699 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 11:13:19.301385    4699 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 11:13:19.477277    4699 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 11:13:19.685878    4699 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 11:13:19.686409    4699 kubeadm.go:310] 
	I0802 11:13:19.686441    4699 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 11:13:19.686444    4699 kubeadm.go:310] 
	I0802 11:13:19.686488    4699 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 11:13:19.686494    4699 kubeadm.go:310] 
	I0802 11:13:19.686506    4699 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 11:13:19.686536    4699 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 11:13:19.686567    4699 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 11:13:19.686569    4699 kubeadm.go:310] 
	I0802 11:13:19.686595    4699 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 11:13:19.686597    4699 kubeadm.go:310] 
	I0802 11:13:19.686651    4699 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 11:13:19.686670    4699 kubeadm.go:310] 
	I0802 11:13:19.686718    4699 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 11:13:19.686759    4699 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 11:13:19.686822    4699 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 11:13:19.686826    4699 kubeadm.go:310] 
	I0802 11:13:19.686867    4699 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 11:13:19.686905    4699 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 11:13:19.686907    4699 kubeadm.go:310] 
	I0802 11:13:19.686947    4699 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2w8ki8.s5djwx0dmusw95zk \
	I0802 11:13:19.686998    4699 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d \
	I0802 11:13:19.687008    4699 kubeadm.go:310] 	--control-plane 
	I0802 11:13:19.687011    4699 kubeadm.go:310] 
	I0802 11:13:19.687061    4699 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 11:13:19.687064    4699 kubeadm.go:310] 
	I0802 11:13:19.687109    4699 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2w8ki8.s5djwx0dmusw95zk \
	I0802 11:13:19.687187    4699 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d 
	I0802 11:13:19.687302    4699 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
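The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the DER-encoded public key of the cluster CA. A minimal sketch of deriving it in Go (the cert path is the certificateDir folder shown earlier in the kubeadm output):

    // Compute kubeadm's discovery-token-ca-cert-hash from the CA certificate:
    // sha256 over the DER-encoded SubjectPublicKeyInfo.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }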
	I0802 11:13:19.687351    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:13:19.687360    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:13:19.691622    4699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 11:13:19.699669    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 11:13:19.702714    4699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
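The 496-byte conflist copied above is not reproduced in the log. Purely as an illustration of the general shape of a bridge CNI conflist (every field value below is an assumption, not the file minikube wrote):

    // Write a sample bridge CNI conflist. Demo only: writes to the current
    // directory, not /etc/cni/net.d, and the JSON is illustrative.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("1-k8s.conflist.sample", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }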
	I0802 11:13:19.708737    4699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 11:13:19.708802    4699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 11:13:19.708803    4699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-387000 minikube.k8s.io/updated_at=2024_08_02T11_13_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=stopped-upgrade-387000 minikube.k8s.io/primary=true
	I0802 11:13:19.745656    4699 kubeadm.go:1113] duration metric: took 36.899834ms to wait for elevateKubeSystemPrivileges
	I0802 11:13:19.745669    4699 ops.go:34] apiserver oom_adj: -16
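The oom_adj value is read from /proc/<pid>/oom_adj of the apiserver process; -16 makes the kernel's OOM killer much less likely to target it. A simplified sketch of the same read (the pgrep flags are reduced from the log's -xnf form):

    // Find the newest kube-apiserver process and read its oom_adj.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no apiserver process:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }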
	I0802 11:13:19.745675    4699 kubeadm.go:394] duration metric: took 4m11.269257084s to StartCluster
	I0802 11:13:19.745685    4699 settings.go:142] acquiring lock: {Name:mke9d9a6b3c42219545f5aed5860e740f1b28aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:13:19.745780    4699 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:13:19.746180    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:13:19.746378    4699 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:13:19.746409    4699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 11:13:19.746462    4699 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-387000"
	I0802 11:13:19.746471    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:13:19.746477    4699 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-387000"
	W0802 11:13:19.746481    4699 addons.go:243] addon storage-provisioner should already be in state true
	I0802 11:13:19.746480    4699 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-387000"
	I0802 11:13:19.746494    4699 host.go:66] Checking if "stopped-upgrade-387000" exists ...
	I0802 11:13:19.746534    4699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-387000"
	I0802 11:13:19.747773    4699 kapi.go:59] client config for stopped-upgrade-387000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e641b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
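The kapi client config dumped above is a client-go rest.Config. A minimal sketch, assuming the k8s.io/client-go module is available, of turning such a config into a usable clientset (host and cert paths taken from the log line):

    // Build a Kubernetes clientset from a rest.Config like the one logged.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", clientset)
    }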
	I0802 11:13:19.747894    4699 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-387000"
	W0802 11:13:19.747903    4699 addons.go:243] addon default-storageclass should already be in state true
	I0802 11:13:19.747909    4699 host.go:66] Checking if "stopped-upgrade-387000" exists ...
	I0802 11:13:19.750615    4699 out.go:177] * Verifying Kubernetes components...
	I0802 11:13:19.750991    4699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 11:13:19.754707    4699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 11:13:19.754714    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:13:19.758566    4699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:13:19.762637    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:13:19.766569    4699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:13:19.766575    4699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 11:13:19.766581    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:13:19.844550    4699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:13:19.849537    4699 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:13:19.849578    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:13:19.853425    4699 api_server.go:72] duration metric: took 107.036916ms to wait for apiserver process to appear ...
	I0802 11:13:19.853433    4699 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:13:19.853439    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:18.185742    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:18.185849    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:18.197552    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:18.197627    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:18.209559    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:18.209637    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:18.227701    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:18.227774    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:18.243514    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:18.243586    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:18.254360    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:18.254432    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:18.266071    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:18.266140    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:18.276751    4562 logs.go:276] 0 containers: []
	W0802 11:13:18.276762    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:18.276830    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:18.290097    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:18.290114    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:18.290120    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:18.326153    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:18.326170    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:18.361743    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:18.361754    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:18.375936    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:18.375946    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:18.387334    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:18.387345    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:18.399696    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:18.399706    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:18.419195    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:18.419206    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:18.434984    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:18.434994    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:18.450136    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:18.450147    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:18.462313    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:18.462323    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:18.476639    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:18.476654    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:18.488770    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:18.488781    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:18.500961    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:18.500972    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:18.519649    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:18.519663    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:18.544431    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:18.544442    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:19.871198    4699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 11:13:19.890578    4699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:13:21.050785    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:24.854766    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:24.854789    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:26.053294    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:26.053742    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:26.094168    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:26.094307    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:26.115591    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:26.115685    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:26.130215    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:26.130297    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:26.142461    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:26.142537    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:26.153829    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:26.153899    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:26.164849    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:26.164920    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:26.175182    4562 logs.go:276] 0 containers: []
	W0802 11:13:26.175193    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:26.175250    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:26.185487    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:26.185505    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:26.185510    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:26.219081    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:26.219089    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:26.233341    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:26.233354    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:26.245461    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:26.245475    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:26.250205    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:26.250212    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:26.286222    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:26.286237    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:26.298333    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:26.298345    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:26.310479    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:26.310493    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:26.322789    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:26.322800    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:26.336536    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:26.336546    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:26.351629    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:26.351641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:26.367888    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:26.367899    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:26.382643    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:26.382658    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:26.404693    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:26.404707    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:26.423010    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:26.423024    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:28.947996    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:29.855165    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:29.855221    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:33.950111    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:33.950210    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:33.961889    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:33.961962    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:33.973092    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:33.973162    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:33.985364    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:33.985439    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:33.996900    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:33.996977    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:34.007829    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:34.007901    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:34.023614    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:34.023685    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:34.034667    4562 logs.go:276] 0 containers: []
	W0802 11:13:34.034677    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:34.034735    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:34.045787    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:34.045806    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:34.045812    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:34.059404    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:34.059417    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:34.078224    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:34.078235    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:34.083394    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:34.083407    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:34.095429    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:34.095440    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:34.107070    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:34.107084    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:34.146352    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:34.146366    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:34.158591    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:34.158602    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:34.173887    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:34.173900    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:34.187628    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:34.187641    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:34.199115    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:34.199128    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:34.215256    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:34.215270    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:34.226524    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:34.226537    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:34.249629    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:34.249638    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:34.283462    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:34.283472    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:34.855231    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:34.855250    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:36.799488    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:39.855380    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:39.855453    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:41.801526    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:41.801694    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:41.813776    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:41.813855    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:41.824653    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:41.824727    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:41.835500    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:41.835578    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:41.859784    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:41.859863    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:41.874735    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:41.874812    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:41.885486    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:41.885557    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:41.896317    4562 logs.go:276] 0 containers: []
	W0802 11:13:41.896328    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:41.896390    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:41.906772    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:41.906791    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:41.906796    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:41.920842    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:41.920855    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:41.933263    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:41.933274    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:41.945949    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:41.945958    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:41.957783    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:41.957798    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:41.962121    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:41.962131    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:41.996600    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:41.996614    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:42.017327    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:42.017339    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:42.029368    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:42.029379    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:42.040955    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:42.040967    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:42.056886    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:42.056897    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:42.068975    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:42.068984    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:42.092433    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:42.092442    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:42.104251    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:42.104265    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:42.139863    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:42.139873    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:44.656551    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:44.855883    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:44.855911    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:49.658622    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:49.658746    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:49.670172    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:49.670256    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:49.681527    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:49.681597    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:49.692106    4562 logs.go:276] 4 containers: [2ef39923a680 40a7e5e7fb55 1fbb8e62e165 e2699333b635]
	I0802 11:13:49.692206    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:49.702992    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:49.703059    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:49.713071    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:49.713142    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:49.724492    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:49.724565    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:49.734869    4562 logs.go:276] 0 containers: []
	W0802 11:13:49.734879    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:49.734942    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:49.747349    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:49.747366    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:49.747371    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:49.764818    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:49.764831    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:49.780379    4562 logs.go:123] Gathering logs for coredns [e2699333b635] ...
	I0802 11:13:49.780390    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2699333b635"
	I0802 11:13:49.792073    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:49.792084    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:49.803619    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:49.803629    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:49.815724    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:49.815739    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:49.849777    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:49.849785    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:49.868859    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:49.868868    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:49.887779    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:49.887789    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:49.892598    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:49.892604    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:49.904883    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:49.904894    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:49.916726    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:49.916737    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:49.940989    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:49.941003    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:49.856256    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:49.856276    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0802 11:13:50.214781    4699 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0802 11:13:50.218594    4699 out.go:177] * Enabled addons: storage-provisioner
	I0802 11:13:50.226595    4699 addons.go:510] duration metric: took 30.481269083s for enable addons: enabled=[storage-provisioner]
	I0802 11:13:49.959954    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:49.959965    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:50.001085    4562 logs.go:123] Gathering logs for coredns [1fbb8e62e165] ...
	I0802 11:13:50.001094    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fbb8e62e165"
	I0802 11:13:52.514818    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:54.856799    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:54.856845    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:57.516955    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:57.517154    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:57.533835    4562 logs.go:276] 1 containers: [7cd8d7696cfa]
	I0802 11:13:57.533927    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:57.546283    4562 logs.go:276] 1 containers: [bd9cc7f29d3b]
	I0802 11:13:57.546361    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:57.559988    4562 logs.go:276] 4 containers: [294ca712bac3 333afebe2486 2ef39923a680 40a7e5e7fb55]
	I0802 11:13:57.560059    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:57.570835    4562 logs.go:276] 1 containers: [bf1e759796bd]
	I0802 11:13:57.570908    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:57.581442    4562 logs.go:276] 1 containers: [a20bb040c8f3]
	I0802 11:13:57.581526    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:57.592132    4562 logs.go:276] 1 containers: [46d35b03bce7]
	I0802 11:13:57.592199    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:57.602951    4562 logs.go:276] 0 containers: []
	W0802 11:13:57.602962    4562 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:57.603021    4562 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:57.612947    4562 logs.go:276] 1 containers: [18e302f2e6de]
	I0802 11:13:57.612963    4562 logs.go:123] Gathering logs for container status ...
	I0802 11:13:57.612969    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:57.632499    4562 logs.go:123] Gathering logs for kube-controller-manager [46d35b03bce7] ...
	I0802 11:13:57.632510    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d35b03bce7"
	I0802 11:13:57.650519    4562 logs.go:123] Gathering logs for coredns [333afebe2486] ...
	I0802 11:13:57.650529    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333afebe2486"
	I0802 11:13:57.662394    4562 logs.go:123] Gathering logs for coredns [2ef39923a680] ...
	I0802 11:13:57.662405    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef39923a680"
	I0802 11:13:57.674757    4562 logs.go:123] Gathering logs for storage-provisioner [18e302f2e6de] ...
	I0802 11:13:57.674767    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18e302f2e6de"
	I0802 11:13:57.686482    4562 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:57.686494    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:57.722372    4562 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:57.722382    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:57.726746    4562 logs.go:123] Gathering logs for kube-apiserver [7cd8d7696cfa] ...
	I0802 11:13:57.726753    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cd8d7696cfa"
	I0802 11:13:57.741849    4562 logs.go:123] Gathering logs for etcd [bd9cc7f29d3b] ...
	I0802 11:13:57.741860    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9cc7f29d3b"
	I0802 11:13:57.756568    4562 logs.go:123] Gathering logs for kube-scheduler [bf1e759796bd] ...
	I0802 11:13:57.756582    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1e759796bd"
	I0802 11:13:57.772052    4562 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:57.772065    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:57.795849    4562 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:57.795869    4562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:57.829542    4562 logs.go:123] Gathering logs for coredns [40a7e5e7fb55] ...
	I0802 11:13:57.829551    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a7e5e7fb55"
	I0802 11:13:57.843309    4562 logs.go:123] Gathering logs for kube-proxy [a20bb040c8f3] ...
	I0802 11:13:57.843319    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a20bb040c8f3"
	I0802 11:13:57.855973    4562 logs.go:123] Gathering logs for coredns [294ca712bac3] ...
	I0802 11:13:57.855982    4562 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 294ca712bac3"
	I0802 11:13:59.857351    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:59.857389    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:00.371451    4562 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:05.373588    4562 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:05.378299    4562 out.go:177] 
	W0802 11:14:05.381186    4562 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0802 11:14:05.381196    4562 out.go:239] * 
	W0802 11:14:05.381962    4562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:14:05.397105    4562 out.go:177] 
	I0802 11:14:04.858107    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:04.858131    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:09.859215    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:09.859260    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:14.859841    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:14.859864    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
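
	Note: the api_server.go lines above poll the apiserver's /healthz endpoint every few seconds until a six-minute deadline expires; the GUEST_START failure above is that deadline firing. A minimal Go sketch of such a polling loop, assuming a 5s per-request client timeout and a self-signed apiserver certificate (an illustration of the pattern, not minikube's actual code):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Per-request timeout; expiry surfaces as the log's
	        // "Client.Timeout exceeded while awaiting headers" error.
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Assumption: the apiserver on 10.0.2.15:8443 serves a self-signed cert.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget in the log
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err == nil {
	                ok := resp.StatusCode == http.StatusOK
	                resp.Body.Close()
	                if ok {
	                    fmt.Println("apiserver is healthy")
	                    return
	                }
	            }
	            time.Sleep(5 * time.Second) // retry cadence visible in the timestamps above
	        }
	        fmt.Println("apiserver healthz never reported healthy")
	    }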
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-08-02 18:04:56 UTC, ends at Fri 2024-08-02 18:14:21 UTC. --
	Aug 02 18:14:00 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:00Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 02 18:14:05 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:05Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 02 18:14:06 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:06Z" level=error msg="ContainerStats resp: {0x40007f3a80 linux}"
	Aug 02 18:14:06 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:06Z" level=error msg="ContainerStats resp: {0x40008bb380 linux}"
	Aug 02 18:14:07 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:07Z" level=error msg="ContainerStats resp: {0x40008faa00 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x4000957900 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x4000957ac0 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x40008fba00 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x40004f1080 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x40004f1ac0 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x4000358240 linux}"
	Aug 02 18:14:08 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:08Z" level=error msg="ContainerStats resp: {0x40003584c0 linux}"
	Aug 02 18:14:10 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 02 18:14:15 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:15Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 02 18:14:18 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:18Z" level=error msg="ContainerStats resp: {0x4000943740 linux}"
	Aug 02 18:14:18 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:18Z" level=error msg="ContainerStats resp: {0x400086eb80 linux}"
	Aug 02 18:14:19 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:19Z" level=error msg="ContainerStats resp: {0x40004f1e40 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000358400 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000495900 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000358040 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000494540 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000494980 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000358f00 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=error msg="ContainerStats resp: {0x4000359300 linux}"
	Aug 02 18:14:20 running-upgrade-894000 cri-dockerd[3120]: time="2024-08-02T18:14:20Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	294ca712bac3e       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   ed2c36d3e31ab
	333afebe24864       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   b0b426398c458
	2ef39923a6806       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b0b426398c458
	40a7e5e7fb55d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ed2c36d3e31ab
	a20bb040c8f3f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c056fe89c089b
	18e302f2e6dec       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   13696b8cb82ea
	bf1e759796bde       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c630b52e1991c
	46d35b03bce71       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bd54db0b66238
	bd9cc7f29d3bc       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   898142ce2edda
	7cd8d7696cfa0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9010bd5d2e256
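
	Note: the table above is produced by the fallback command shown verbatim in the "container status" gathering steps earlier in the log (prefer crictl, fall back to docker). A small Go sketch that shells out the same way, assuming passwordless sudo inside the guest (illustration only):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Same fallback as the gathering command quoted above:
	        // use crictl if it is on PATH, otherwise docker.
	        out, err := exec.Command("/bin/bash", "-c",
	            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	        if err != nil {
	            fmt.Println("both crictl and docker ps failed:", err)
	        }
	        fmt.Print(string(out))
	    }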
	
	
	==> coredns [294ca712bac3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:36728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:48844->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:48337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:44255->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:34014->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:46520->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2834173949578080572.7410358173658412369. HINFO: read udp 10.244.0.3:54551->10.0.2.3:53: i/o timeout
	
	
	==> coredns [2ef39923a680] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:35624->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:37024->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:40144->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:56285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:51023->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:36878->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:36131->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:55583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:46976->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5745877979423814038.1315183724372705109. HINFO: read udp 10.244.0.2:34744->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [333afebe2486] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:39127->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:47278->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:57546->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:38411->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:48169->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:55715->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1366210153705619561.4054605093682970181. HINFO: read udp 10.244.0.2:35730->10.0.2.3:53: i/o timeout
	
	
	==> coredns [40a7e5e7fb55] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:44395->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:51908->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:47298->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:45853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:42348->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:51527->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:59539->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:56882->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:53027->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9216613449973569148.7191127423449215079. HINFO: read udp 10.244.0.3:44164->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
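
	Note: each CoreDNS replica probes its upstream with an HINFO query for a random name at startup; all four blocks above show those reads to 10.0.2.3:53 (QEMU's user-mode DNS) timing out, so the pods have no working upstream resolver. A minimal Go sketch reproducing the probe from inside the guest network, with "kubernetes.io" as an arbitrary query name (illustrative, not CoreDNS code):

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Resolve through 10.0.2.3:53 directly, the upstream the
	        // coredns logs above fail to read from.
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                return d.DialContext(ctx, "udp", "10.0.2.3:53")
	            },
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	        defer cancel()
	        // "kubernetes.io" is an arbitrary probe name chosen for this sketch.
	        if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
	            fmt.Println("upstream probe failed:", err) // expect the same i/o timeout
	        } else {
	            fmt.Println("upstream DNS reachable")
	        }
	    }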
	
	
	==> describe nodes <==
	Name:               running-upgrade-894000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-894000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=running-upgrade-894000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T11_10_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:10:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-894000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:14:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:10:04 +0000   Fri, 02 Aug 2024 18:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:10:04 +0000   Fri, 02 Aug 2024 18:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:10:04 +0000   Fri, 02 Aug 2024 18:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:10:04 +0000   Fri, 02 Aug 2024 18:10:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-894000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 92350aeddf414209ad3362a65a2687f2
	  System UUID:                92350aeddf414209ad3362a65a2687f2
	  Boot ID:                    fee34821-9e1e-4ea5-8d8b-db5436ce057f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-st7g7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-v8zsc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-894000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-894000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-894000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-qxgtp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-894000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-894000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-894000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-894000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-894000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-894000 event: Registered Node running-upgrade-894000 in Controller
	
	
	==> dmesg <==
	[  +1.927518] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.081618] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.074205] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.143003] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.086782] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.086680] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +3.161473] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[ +24.151758] systemd-fstab-generator[1996]: Ignoring "noauto" for root device
	[  +2.610660] systemd-fstab-generator[2278]: Ignoring "noauto" for root device
	[  +0.150125] systemd-fstab-generator[2312]: Ignoring "noauto" for root device
	[  +0.084912] systemd-fstab-generator[2323]: Ignoring "noauto" for root device
	[  +0.099119] systemd-fstab-generator[2336]: Ignoring "noauto" for root device
	[  +2.638361] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.214008] systemd-fstab-generator[3076]: Ignoring "noauto" for root device
	[  +0.084087] systemd-fstab-generator[3088]: Ignoring "noauto" for root device
	[  +0.080013] systemd-fstab-generator[3099]: Ignoring "noauto" for root device
	[  +0.094349] systemd-fstab-generator[3113]: Ignoring "noauto" for root device
	[  +2.360751] systemd-fstab-generator[3267]: Ignoring "noauto" for root device
	[  +5.195887] systemd-fstab-generator[4011]: Ignoring "noauto" for root device
	[  +1.880749] systemd-fstab-generator[4496]: Ignoring "noauto" for root device
	[Aug 2 18:06] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 2 18:09] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.617046] systemd-fstab-generator[12502]: Ignoring "noauto" for root device
	[Aug 2 18:10] systemd-fstab-generator[13112]: Ignoring "noauto" for root device
	[  +0.463550] systemd-fstab-generator[13247]: Ignoring "noauto" for root device
	
	
	==> etcd [bd9cc7f29d3b] <==
	{"level":"info","ts":"2024-08-02T18:09:59.947Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T18:09:59.947Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-02T18:09:59.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-02T18:09:59.947Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-02T18:09:59.948Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-02T18:09:59.948Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-02T18:09:59.948Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-894000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:10:00.513Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:10:00.514Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:10:00.514Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:10:00.515Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-02T18:10:00.515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:10:00.515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:10:00.518Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:10:00.518Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:10:00.518Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:14:21 up 9 min,  0 users,  load average: 0.43, 0.40, 0.20
	Linux running-upgrade-894000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7cd8d7696cfa] <==
	I0802 18:10:01.802129       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0802 18:10:01.823876       1 cache.go:39] Caches are synced for autoregister controller
	I0802 18:10:01.823877       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0802 18:10:01.823884       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:10:01.848820       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 18:10:01.851299       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0802 18:10:01.851316       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:10:02.560332       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 18:10:02.746611       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0802 18:10:02.761816       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0802 18:10:02.761968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:10:02.888473       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:10:02.900675       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:10:02.986641       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0802 18:10:02.989056       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0802 18:10:02.989466       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 18:10:02.990932       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:10:03.874310       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 18:10:04.437708       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 18:10:04.440599       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0802 18:10:04.454164       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 18:10:04.489238       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:10:17.013615       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0802 18:10:17.724712       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0802 18:10:18.201851       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [46d35b03bce7] <==
	I0802 18:10:17.033863       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-st7g7"
	I0802 18:10:17.036684       1 shared_informer.go:262] Caches are synced for crt configmap
	I0802 18:10:17.038133       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0802 18:10:17.040237       1 range_allocator.go:374] Set node running-upgrade-894000 PodCIDR to [10.244.0.0/24]
	I0802 18:10:17.041339       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0802 18:10:17.043763       1 shared_informer.go:262] Caches are synced for persistent volume
	I0802 18:10:17.044997       1 shared_informer.go:262] Caches are synced for expand
	I0802 18:10:17.047179       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0802 18:10:17.048794       1 shared_informer.go:262] Caches are synced for TTL
	I0802 18:10:17.069678       1 shared_informer.go:262] Caches are synced for GC
	I0802 18:10:17.069799       1 shared_informer.go:262] Caches are synced for disruption
	I0802 18:10:17.069820       1 disruption.go:371] Sending events to api server.
	I0802 18:10:17.082307       1 shared_informer.go:262] Caches are synced for HPA
	I0802 18:10:17.086306       1 shared_informer.go:262] Caches are synced for attach detach
	I0802 18:10:17.119974       1 shared_informer.go:262] Caches are synced for endpoint
	I0802 18:10:17.170232       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0802 18:10:17.188656       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0802 18:10:17.207542       1 shared_informer.go:262] Caches are synced for daemon sets
	I0802 18:10:17.251077       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 18:10:17.271926       1 shared_informer.go:262] Caches are synced for stateful set
	I0802 18:10:17.301354       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 18:10:17.664854       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 18:10:17.669960       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 18:10:17.669972       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0802 18:10:17.727565       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qxgtp"
	
	
	==> kube-proxy [a20bb040c8f3] <==
	I0802 18:10:18.186480       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0802 18:10:18.186503       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0802 18:10:18.186512       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 18:10:18.199590       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0802 18:10:18.199601       1 server_others.go:206] "Using iptables Proxier"
	I0802 18:10:18.199615       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 18:10:18.199701       1 server.go:661] "Version info" version="v1.24.1"
	I0802 18:10:18.199710       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:10:18.199978       1 config.go:317] "Starting service config controller"
	I0802 18:10:18.199989       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 18:10:18.200000       1 config.go:226] "Starting endpoint slice config controller"
	I0802 18:10:18.200005       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 18:10:18.200245       1 config.go:444] "Starting node config controller"
	I0802 18:10:18.200323       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 18:10:18.300115       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 18:10:18.300120       1 shared_informer.go:262] Caches are synced for service config
	I0802 18:10:18.300366       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [bf1e759796bd] <==
	W0802 18:10:01.785819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 18:10:01.785827       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 18:10:01.785847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:01.785852       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:01.785889       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:01.786158       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:01.785921       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:01.786163       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:01.786142       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 18:10:01.786168       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 18:10:01.786218       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 18:10:01.786228       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 18:10:02.674116       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:02.674210       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:02.725603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:02.726048       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:02.770855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 18:10:02.771021       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 18:10:02.786944       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 18:10:02.787030       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 18:10:02.795723       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 18:10:02.795778       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 18:10:02.802572       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 18:10:02.802585       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0802 18:10:03.383286       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-08-02 18:04:56 UTC, ends at Fri 2024-08-02 18:14:21 UTC. --
	Aug 02 18:10:04 running-upgrade-894000 kubelet[13118]: I0802 18:10:04.788296   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f8d81d1081845588ccda67e9507926a-flexvolume-dir\") pod \"kube-controller-manager-running-upgrade-894000\" (UID: \"9f8d81d1081845588ccda67e9507926a\") " pod="kube-system/kube-controller-manager-running-upgrade-894000"
	Aug 02 18:10:04 running-upgrade-894000 kubelet[13118]: I0802 18:10:04.788310   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8d81d1081845588ccda67e9507926a-k8s-certs\") pod \"kube-controller-manager-running-upgrade-894000\" (UID: \"9f8d81d1081845588ccda67e9507926a\") " pod="kube-system/kube-controller-manager-running-upgrade-894000"
	Aug 02 18:10:05 running-upgrade-894000 kubelet[13118]: I0802 18:10:05.471370   13118 apiserver.go:52] "Watching apiserver"
	Aug 02 18:10:05 running-upgrade-894000 kubelet[13118]: I0802 18:10:05.701888   13118 reconciler.go:157] "Reconciler: start to sync state"
	Aug 02 18:10:06 running-upgrade-894000 kubelet[13118]: E0802 18:10:06.073341   13118 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-894000\" already exists" pod="kube-system/etcd-running-upgrade-894000"
	Aug 02 18:10:06 running-upgrade-894000 kubelet[13118]: E0802 18:10:06.273207   13118 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-894000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-894000"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.029578   13118 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.037717   13118 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.039125   13118 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.088940   13118 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.089260   13118 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189877   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2g5\" (UniqueName: \"kubernetes.io/projected/f821d00e-45fd-441a-b35d-ed339e235150-kube-api-access-rc2g5\") pod \"coredns-6d4b75cb6d-st7g7\" (UID: \"f821d00e-45fd-441a-b35d-ed339e235150\") " pod="kube-system/coredns-6d4b75cb6d-st7g7"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189910   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rslh2\" (UniqueName: \"kubernetes.io/projected/28efc9df-ce17-4253-836a-5e34e37bc3f3-kube-api-access-rslh2\") pod \"storage-provisioner\" (UID: \"28efc9df-ce17-4253-836a-5e34e37bc3f3\") " pod="kube-system/storage-provisioner"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189927   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f821d00e-45fd-441a-b35d-ed339e235150-config-volume\") pod \"coredns-6d4b75cb6d-st7g7\" (UID: \"f821d00e-45fd-441a-b35d-ed339e235150\") " pod="kube-system/coredns-6d4b75cb6d-st7g7"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189940   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83515602-1b7a-4b0e-859e-75b933c20472-config-volume\") pod \"coredns-6d4b75cb6d-v8zsc\" (UID: \"83515602-1b7a-4b0e-859e-75b933c20472\") " pod="kube-system/coredns-6d4b75cb6d-v8zsc"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189955   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cpcg\" (UniqueName: \"kubernetes.io/projected/83515602-1b7a-4b0e-859e-75b933c20472-kube-api-access-4cpcg\") pod \"coredns-6d4b75cb6d-v8zsc\" (UID: \"83515602-1b7a-4b0e-859e-75b933c20472\") " pod="kube-system/coredns-6d4b75cb6d-v8zsc"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.189969   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/28efc9df-ce17-4253-836a-5e34e37bc3f3-tmp\") pod \"storage-provisioner\" (UID: \"28efc9df-ce17-4253-836a-5e34e37bc3f3\") " pod="kube-system/storage-provisioner"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.646051   13118 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ed2c36d3e31abd45ac421115f7370d1f246b82a926e4a381370b57a05ef91d5e"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.731500   13118 topology_manager.go:200] "Topology Admit Handler"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.894101   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9e73172-a48f-43d6-b0b0-e04d4e5f63bc-lib-modules\") pod \"kube-proxy-qxgtp\" (UID: \"b9e73172-a48f-43d6-b0b0-e04d4e5f63bc\") " pod="kube-system/kube-proxy-qxgtp"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.894163   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7bz\" (UniqueName: \"kubernetes.io/projected/b9e73172-a48f-43d6-b0b0-e04d4e5f63bc-kube-api-access-bc7bz\") pod \"kube-proxy-qxgtp\" (UID: \"b9e73172-a48f-43d6-b0b0-e04d4e5f63bc\") " pod="kube-system/kube-proxy-qxgtp"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.894175   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9e73172-a48f-43d6-b0b0-e04d4e5f63bc-xtables-lock\") pod \"kube-proxy-qxgtp\" (UID: \"b9e73172-a48f-43d6-b0b0-e04d4e5f63bc\") " pod="kube-system/kube-proxy-qxgtp"
	Aug 02 18:10:17 running-upgrade-894000 kubelet[13118]: I0802 18:10:17.894185   13118 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9e73172-a48f-43d6-b0b0-e04d4e5f63bc-kube-proxy\") pod \"kube-proxy-qxgtp\" (UID: \"b9e73172-a48f-43d6-b0b0-e04d4e5f63bc\") " pod="kube-system/kube-proxy-qxgtp"
	Aug 02 18:13:55 running-upgrade-894000 kubelet[13118]: I0802 18:13:55.962924   13118 scope.go:110] "RemoveContainer" containerID="e2699333b63539b566484f640b8b0ba01bc577ecba60175f1e7f032fd9ca7bdb"
	Aug 02 18:13:55 running-upgrade-894000 kubelet[13118]: I0802 18:13:55.974668   13118 scope.go:110] "RemoveContainer" containerID="1fbb8e62e1659df16e8c1a20aa51fc4af71679008c8a24292e9f34054b141bd0"
	
	
	==> storage-provisioner [18e302f2e6de] <==
	I0802 18:10:17.657207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 18:10:17.666202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 18:10:17.667028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 18:10:17.675130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 18:10:17.678207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d0a789b-2f06-40d5-8c60-0fa4df3cc204", APIVersion:"v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-894000_c84b52ca-83fd-4ae5-b677-c166a8c37c46 became leader
	I0802 18:10:17.679120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-894000_c84b52ca-83fd-4ae5-b677-c166a8c37c46!
	I0802 18:10:17.780947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-894000_c84b52ca-83fd-4ae5-b677-c166a8c37c46!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-894000 -n running-upgrade-894000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-894000 -n running-upgrade-894000: exit status 2 (15.600139166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-894000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-894000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-894000: (1.167471s)
--- FAIL: TestRunningBinaryUpgrade (610.45s)
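The post-mortem above ends on a single sample of the apiserver state: one `status --format={{.APIServer}}` call returned "Stopped" and the helpers gave up. Where a one-shot probe is too racy, a small poll loop makes the check deterministic. A minimal sketch, assuming the binary path, profile name, and timeout shown in this log (this is not the suite's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServer polls `minikube status --format={{.APIServer}}` until it
	// reports Running or the timeout expires. A non-zero exit from `status` is
	// expected while components are down, so only the printed state is checked.
	func waitForAPIServer(binary, profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command(binary, "status",
				"--format={{.APIServer}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Running" {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("apiserver did not reach Running within %v", timeout)
	}

	func main() {
		err := waitForAPIServer("out/minikube-darwin-arm64", "running-upgrade-894000", 2*time.Minute)
		fmt.Println(err)
	}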

TestKubernetesUpgrade (18.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.894952417s)

-- stdout --
	* [kubernetes-upgrade-226000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-226000" primary control-plane node in "kubernetes-upgrade-226000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-226000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:07:28.413773    4628 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:07:28.413921    4628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:07:28.413930    4628 out.go:304] Setting ErrFile to fd 2...
	I0802 11:07:28.413933    4628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:07:28.414065    4628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:07:28.415231    4628 out.go:298] Setting JSON to false
	I0802 11:07:28.432100    4628 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4012,"bootTime":1722618036,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:07:28.432174    4628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:07:28.438261    4628 out.go:177] * [kubernetes-upgrade-226000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:07:28.446322    4628 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:07:28.446384    4628 notify.go:220] Checking for updates...
	I0802 11:07:28.454338    4628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:07:28.457342    4628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:07:28.460290    4628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:07:28.463303    4628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:07:28.466236    4628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:07:28.469643    4628 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:07:28.469703    4628 config.go:182] Loaded profile config "running-upgrade-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:07:28.469751    4628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:07:28.474282    4628 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:07:28.481322    4628 start.go:297] selected driver: qemu2
	I0802 11:07:28.481329    4628 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:07:28.481338    4628 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:07:28.483587    4628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:07:28.486285    4628 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:07:28.489338    4628 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 11:07:28.489381    4628 cni.go:84] Creating CNI manager for ""
	I0802 11:07:28.489388    4628 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 11:07:28.489420    4628 start.go:340] cluster config:
	{Name:kubernetes-upgrade-226000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:07:28.492896    4628 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:07:28.500345    4628 out.go:177] * Starting "kubernetes-upgrade-226000" primary control-plane node in "kubernetes-upgrade-226000" cluster
	I0802 11:07:28.504319    4628 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 11:07:28.504335    4628 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 11:07:28.504349    4628 cache.go:56] Caching tarball of preloaded images
	I0802 11:07:28.504414    4628 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:07:28.504419    4628 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0802 11:07:28.504478    4628 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kubernetes-upgrade-226000/config.json ...
	I0802 11:07:28.504488    4628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kubernetes-upgrade-226000/config.json: {Name:mk4de60d4e4cb9d3ce0cca1b9fee4ca1c9f4ca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:07:28.504819    4628 start.go:360] acquireMachinesLock for kubernetes-upgrade-226000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:07:28.504854    4628 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "kubernetes-upgrade-226000"
	I0802 11:07:28.504865    4628 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:07:28.504901    4628 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:07:28.513245    4628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:07:28.528981    4628 start.go:159] libmachine.API.Create for "kubernetes-upgrade-226000" (driver="qemu2")
	I0802 11:07:28.529015    4628 client.go:168] LocalClient.Create starting
	I0802 11:07:28.529089    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:07:28.529125    4628 main.go:141] libmachine: Decoding PEM data...
	I0802 11:07:28.529135    4628 main.go:141] libmachine: Parsing certificate...
	I0802 11:07:28.529166    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:07:28.529189    4628 main.go:141] libmachine: Decoding PEM data...
	I0802 11:07:28.529196    4628 main.go:141] libmachine: Parsing certificate...
	I0802 11:07:28.529643    4628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:07:28.684122    4628 main.go:141] libmachine: Creating SSH key...
	I0802 11:07:28.778334    4628 main.go:141] libmachine: Creating Disk image...
	I0802 11:07:28.778340    4628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:07:28.778510    4628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:28.787764    4628 main.go:141] libmachine: STDOUT: 
	I0802 11:07:28.787782    4628 main.go:141] libmachine: STDERR: 
	I0802 11:07:28.787834    4628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2 +20000M
	I0802 11:07:28.795902    4628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:07:28.795916    4628 main.go:141] libmachine: STDERR: 
	I0802 11:07:28.795935    4628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:28.795943    4628 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:07:28.795955    4628 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:07:28.795978    4628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a3:f5:47:c0:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:28.797531    4628 main.go:141] libmachine: STDOUT: 
	I0802 11:07:28.797544    4628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:07:28.797567    4628 client.go:171] duration metric: took 268.557375ms to LocalClient.Create
	I0802 11:07:30.799715    4628 start.go:128] duration metric: took 2.2948615s to createHost
	I0802 11:07:30.799814    4628 start.go:83] releasing machines lock for "kubernetes-upgrade-226000", held for 2.295031084s
	W0802 11:07:30.799884    4628 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:07:30.811150    4628 out.go:177] * Deleting "kubernetes-upgrade-226000" in qemu2 ...
	W0802 11:07:30.839164    4628 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:07:30.839197    4628 start.go:729] Will try again in 5 seconds ...
	I0802 11:07:35.841309    4628 start.go:360] acquireMachinesLock for kubernetes-upgrade-226000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:07:35.841745    4628 start.go:364] duration metric: took 348.125µs to acquireMachinesLock for "kubernetes-upgrade-226000"
	I0802 11:07:35.841873    4628 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:07:35.842135    4628 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:07:35.851624    4628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:07:35.892591    4628 start.go:159] libmachine.API.Create for "kubernetes-upgrade-226000" (driver="qemu2")
	I0802 11:07:35.892647    4628 client.go:168] LocalClient.Create starting
	I0802 11:07:35.892762    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:07:35.892820    4628 main.go:141] libmachine: Decoding PEM data...
	I0802 11:07:35.892838    4628 main.go:141] libmachine: Parsing certificate...
	I0802 11:07:35.892895    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:07:35.892934    4628 main.go:141] libmachine: Decoding PEM data...
	I0802 11:07:35.892949    4628 main.go:141] libmachine: Parsing certificate...
	I0802 11:07:35.893671    4628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:07:36.051723    4628 main.go:141] libmachine: Creating SSH key...
	I0802 11:07:36.210290    4628 main.go:141] libmachine: Creating Disk image...
	I0802 11:07:36.210297    4628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:07:36.210501    4628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:36.219946    4628 main.go:141] libmachine: STDOUT: 
	I0802 11:07:36.219964    4628 main.go:141] libmachine: STDERR: 
	I0802 11:07:36.220017    4628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2 +20000M
	I0802 11:07:36.228367    4628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:07:36.228381    4628 main.go:141] libmachine: STDERR: 
	I0802 11:07:36.228391    4628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:36.228397    4628 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:07:36.228413    4628 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:07:36.228435    4628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:19:d4:54:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:36.230071    4628 main.go:141] libmachine: STDOUT: 
	I0802 11:07:36.230089    4628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:07:36.230101    4628 client.go:171] duration metric: took 337.459292ms to LocalClient.Create
	I0802 11:07:38.232243    4628 start.go:128] duration metric: took 2.390147542s to createHost
	I0802 11:07:38.232346    4628 start.go:83] releasing machines lock for "kubernetes-upgrade-226000", held for 2.390667667s
	W0802 11:07:38.232781    4628 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-226000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-226000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:07:38.247441    4628 out.go:177] 
	W0802 11:07:38.251486    4628 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:07:38.251514    4628 out.go:239] * 
	* 
	W0802 11:07:38.254270    4628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:07:38.265325    4628 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-226000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-226000: (3.397846625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-226000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-226000 status --format={{.Host}}: exit status 7 (56.652958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179194667s)

-- stdout --
	* [kubernetes-upgrade-226000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-226000" primary control-plane node in "kubernetes-upgrade-226000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-226000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-226000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:07:41.765269    4664 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:07:41.765408    4664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:07:41.765412    4664 out.go:304] Setting ErrFile to fd 2...
	I0802 11:07:41.765414    4664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:07:41.765563    4664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:07:41.766607    4664 out.go:298] Setting JSON to false
	I0802 11:07:41.783300    4664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4025,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:07:41.783365    4664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:07:41.787899    4664 out.go:177] * [kubernetes-upgrade-226000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:07:41.793839    4664 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:07:41.793859    4664 notify.go:220] Checking for updates...
	I0802 11:07:41.800799    4664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:07:41.803828    4664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:07:41.806794    4664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:07:41.809837    4664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:07:41.812847    4664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:07:41.816129    4664 config.go:182] Loaded profile config "kubernetes-upgrade-226000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0802 11:07:41.816391    4664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:07:41.820822    4664 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:07:41.827754    4664 start.go:297] selected driver: qemu2
	I0802 11:07:41.827759    4664 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:07:41.827804    4664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:07:41.830264    4664 cni.go:84] Creating CNI manager for ""
	I0802 11:07:41.830279    4664 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:07:41.830303    4664 start.go:340] cluster config:
	{Name:kubernetes-upgrade-226000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-226000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:07:41.834019    4664 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:07:41.839795    4664 out.go:177] * Starting "kubernetes-upgrade-226000" primary control-plane node in "kubernetes-upgrade-226000" cluster
	I0802 11:07:41.843808    4664 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 11:07:41.843822    4664 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0802 11:07:41.843834    4664 cache.go:56] Caching tarball of preloaded images
	I0802 11:07:41.843905    4664 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:07:41.843911    4664 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0802 11:07:41.843968    4664 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kubernetes-upgrade-226000/config.json ...
	I0802 11:07:41.844329    4664 start.go:360] acquireMachinesLock for kubernetes-upgrade-226000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:07:41.844356    4664 start.go:364] duration metric: took 21.583µs to acquireMachinesLock for "kubernetes-upgrade-226000"
	I0802 11:07:41.844364    4664 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:07:41.844368    4664 fix.go:54] fixHost starting: 
	I0802 11:07:41.844481    4664 fix.go:112] recreateIfNeeded on kubernetes-upgrade-226000: state=Stopped err=<nil>
	W0802 11:07:41.844489    4664 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:07:41.852825    4664 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-226000" ...
	I0802 11:07:41.856642    4664 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:07:41.856676    4664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:19:d4:54:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:41.858561    4664 main.go:141] libmachine: STDOUT: 
	I0802 11:07:41.858582    4664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:07:41.858609    4664 fix.go:56] duration metric: took 14.241625ms for fixHost
	I0802 11:07:41.858614    4664 start.go:83] releasing machines lock for "kubernetes-upgrade-226000", held for 14.254958ms
	W0802 11:07:41.858622    4664 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:07:41.858655    4664 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:07:41.858659    4664 start.go:729] Will try again in 5 seconds ...
	I0802 11:07:46.859835    4664 start.go:360] acquireMachinesLock for kubernetes-upgrade-226000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:07:46.860321    4664 start.go:364] duration metric: took 394.667µs to acquireMachinesLock for "kubernetes-upgrade-226000"
	I0802 11:07:46.860449    4664 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:07:46.860462    4664 fix.go:54] fixHost starting: 
	I0802 11:07:46.861079    4664 fix.go:112] recreateIfNeeded on kubernetes-upgrade-226000: state=Stopped err=<nil>
	W0802 11:07:46.861098    4664 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:07:46.868457    4664 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-226000" ...
	I0802 11:07:46.871336    4664 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:07:46.871527    4664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:19:d4:54:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubernetes-upgrade-226000/disk.qcow2
	I0802 11:07:46.879656    4664 main.go:141] libmachine: STDOUT: 
	I0802 11:07:46.879712    4664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:07:46.879783    4664 fix.go:56] duration metric: took 19.32125ms for fixHost
	I0802 11:07:46.879798    4664 start.go:83] releasing machines lock for "kubernetes-upgrade-226000", held for 19.407833ms
	W0802 11:07:46.879942    4664 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-226000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-226000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:07:46.888440    4664 out.go:177] 
	W0802 11:07:46.891452    4664 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:07:46.891476    4664 out.go:239] * 
	* 
	W0802 11:07:46.893461    4664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:07:46.902280    4664 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-226000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-226000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-226000 version --output=json: exit status 1 (59.658208ms)

** stderr ** 
	error: context "kubernetes-upgrade-226000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
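The kubectl failure is downstream noise: neither `start` attempt succeeded, so no "kubernetes-upgrade-226000" context was ever written to the kubeconfig. A sketch of guarding the kubectl step on context existence (the helper is hypothetical, not the test's code; it assumes `kubectl config get-contexts -o name`, which prints one context name per line):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether the current kubeconfig defines the
	// named context.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("kubernetes-upgrade-226000")
		fmt.Println(ok, err)
	}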
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-02 11:07:46.976019 -0700 PDT m=+2538.494024917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-226000 -n kubernetes-upgrade-226000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-226000 -n kubernetes-upgrade-226000: exit status 7 (32.184042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-226000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-226000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-226000
--- FAIL: TestKubernetesUpgrade (18.70s)
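Every create and restart attempt in this test died at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the /var/run/socket_vmnet unix socket refused the connection, so no VM ever booted. A minimal preflight sketch that would surface this before any VM state is created (the check itself is an assumption, not part of minikube; the socket path comes from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to
		// socket_vmnet_client; a refused connection here reproduces the
		// failure without touching any profile or disk image.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}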

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3617425883/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.09s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.4s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4260794049/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.40s)
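Both TestHyperkitDriverSkipUpgrade subtests fail identically for a structural reason: the hyperkit driver exists only for darwin/amd64, so on this darwin/arm64 agent the binary can only exit with DRV_UNSUPPORTED_OS (exit status 56). A sketch of an architecture guard that would skip these subtests up front (a hypothetical helper, not the suite's actual code):

	package upgrade_test

	import (
		"runtime"
		"testing"
	)

	// maybeSkipHyperkit skips hyperkit-dependent subtests on hosts where
	// the driver cannot run at all.
	func maybeSkipHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}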

TestStoppedBinaryUpgrade/Upgrade (572.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3540925567 start -p stopped-upgrade-387000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3540925567 start -p stopped-upgrade-387000 --memory=2200 --vm-driver=qemu2 : (39.7172415s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3540925567 -p stopped-upgrade-387000 stop
E0802 11:08:35.871321    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3540925567 -p stopped-upgrade-387000 stop: (12.126066292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-387000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0802 11:10:32.796773    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 11:12:15.008756    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-387000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.009432917s)

-- stdout --
	* [stopped-upgrade-387000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-387000" primary control-plane node in "stopped-upgrade-387000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-387000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0802 11:08:39.863396    4699 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:08:39.863596    4699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:08:39.863600    4699 out.go:304] Setting ErrFile to fd 2...
	I0802 11:08:39.863603    4699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:08:39.863784    4699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:08:39.864957    4699 out.go:298] Setting JSON to false
	I0802 11:08:39.883943    4699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4083,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:08:39.884012    4699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:08:39.888960    4699 out.go:177] * [stopped-upgrade-387000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:08:39.896959    4699 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:08:39.897017    4699 notify.go:220] Checking for updates...
	I0802 11:08:39.904910    4699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:08:39.907978    4699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:08:39.909426    4699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:08:39.916942    4699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:08:39.920829    4699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:08:39.924193    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:08:39.926952    4699 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 11:08:39.929961    4699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:08:39.933918    4699 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:08:39.940892    4699 start.go:297] selected driver: qemu2
	I0802 11:08:39.940897    4699 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:08:39.940939    4699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:08:39.943705    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:08:39.943721    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:08:39.943775    4699 start.go:340] cluster config:
	{Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:08:39.943827    4699 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:08:39.950887    4699 out.go:177] * Starting "stopped-upgrade-387000" primary control-plane node in "stopped-upgrade-387000" cluster
	I0802 11:08:39.954895    4699 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:08:39.954910    4699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0802 11:08:39.954919    4699 cache.go:56] Caching tarball of preloaded images
	I0802 11:08:39.955012    4699 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:08:39.955017    4699 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0802 11:08:39.955085    4699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/config.json ...
	I0802 11:08:39.955405    4699 start.go:360] acquireMachinesLock for stopped-upgrade-387000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:08:39.955435    4699 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "stopped-upgrade-387000"
	I0802 11:08:39.955442    4699 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:08:39.955448    4699 fix.go:54] fixHost starting: 
	I0802 11:08:39.955556    4699 fix.go:112] recreateIfNeeded on stopped-upgrade-387000: state=Stopped err=<nil>
	W0802 11:08:39.955564    4699 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:08:39.963877    4699 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-387000" ...
	I0802 11:08:39.967905    4699 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:08:39.967986    4699 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-387000 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/disk.qcow2
	I0802 11:08:40.015345    4699 main.go:141] libmachine: STDOUT: 
	I0802 11:08:40.015379    4699 main.go:141] libmachine: STDERR: 
	I0802 11:08:40.015385    4699 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
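	Note: the qemu-system-aarch64 invocation above boots the VM headless (-display none) under hvf acceleration with user-mode networking, so the guest is reachable only through its two hostfwd rules: host port 50471 forwards to the guest's sshd (22) and 50472 to its dockerd (2376). Every provisioning step that follows therefore dials localhost, e.g. with this run's port:

	    # reach the guest the same way the provisioner does
	    ssh -p 50471 docker@127.0.0.1 hostname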
	I0802 11:08:59.885497    4699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/config.json ...
	I0802 11:08:59.886094    4699 machine.go:94] provisionDockerMachine start ...
	I0802 11:08:59.886230    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:08:59.886661    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:08:59.886673    4699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 11:08:59.961160    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 11:08:59.961184    4699 buildroot.go:166] provisioning hostname "stopped-upgrade-387000"
	I0802 11:08:59.961256    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:08:59.961400    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:08:59.961408    4699 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-387000 && echo "stopped-upgrade-387000" | sudo tee /etc/hostname
	I0802 11:09:00.021116    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-387000
	
	I0802 11:09:00.021173    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.021292    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.021301    4699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 11:09:00.079910    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 11:09:00.079926    4699 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-1243/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-1243/.minikube}
	I0802 11:09:00.079936    4699 buildroot.go:174] setting up certificates
	I0802 11:09:00.079940    4699 provision.go:84] configureAuth start
	I0802 11:09:00.079949    4699 provision.go:143] copyHostCerts
	I0802 11:09:00.080017    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem, removing ...
	I0802 11:09:00.080023    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem
	I0802 11:09:00.080224    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.pem (1078 bytes)
	I0802 11:09:00.080417    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem, removing ...
	I0802 11:09:00.080420    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem
	I0802 11:09:00.080467    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/cert.pem (1123 bytes)
	I0802 11:09:00.080570    4699 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem, removing ...
	I0802 11:09:00.080573    4699 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem
	I0802 11:09:00.080618    4699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-1243/.minikube/key.pem (1675 bytes)
	I0802 11:09:00.080731    4699 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-387000 san=[127.0.0.1 localhost minikube stopped-upgrade-387000]
	I0802 11:09:00.185855    4699 provision.go:177] copyRemoteCerts
	I0802 11:09:00.185895    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 11:09:00.185903    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.218550    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 11:09:00.225130    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 11:09:00.232161    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 11:09:00.238991    4699 provision.go:87] duration metric: took 159.048292ms to configureAuth
	I0802 11:09:00.239000    4699 buildroot.go:189] setting minikube options for container-runtime
	I0802 11:09:00.239117    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:09:00.239150    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.239236    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.239241    4699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0802 11:09:00.292556    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0802 11:09:00.292565    4699 buildroot.go:70] root file system type: tmpfs
	I0802 11:09:00.292615    4699 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0802 11:09:00.292660    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.292767    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.292800    4699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0802 11:09:00.350527    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0802 11:09:00.350581    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.350692    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.350701    4699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0802 11:09:00.715955    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0802 11:09:00.715967    4699 machine.go:97] duration metric: took 829.893833ms to provisionDockerMachine
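	Note: the diff-then-replace one-liner a few lines up is an install-on-change guard: the freshly rendered docker.service is only moved into place, with a daemon-reload and restart, when diff -u exits non-zero, i.e. when the rendered unit differs from the installed one or, as on this fresh VM, when no unit exists yet ("can't stat"). The same idiom in isolation (paths illustrative):

	    # install-on-change: only reload/restart when the rendered unit differs
	    sudo diff -u /lib/systemd/system/example.service /tmp/example.service.new || {
	        sudo mv /tmp/example.service.new /lib/systemd/system/example.service
	        sudo systemctl daemon-reload
	        sudo systemctl restart example.service
	    }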
	I0802 11:09:00.715974    4699 start.go:293] postStartSetup for "stopped-upgrade-387000" (driver="qemu2")
	I0802 11:09:00.715981    4699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 11:09:00.716044    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 11:09:00.716053    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.747103    4699 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 11:09:00.748325    4699 info.go:137] Remote host: Buildroot 2021.02.12
	I0802 11:09:00.748332    4699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/addons for local assets ...
	I0802 11:09:00.748405    4699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-1243/.minikube/files for local assets ...
	I0802 11:09:00.748519    4699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem -> 17472.pem in /etc/ssl/certs
	I0802 11:09:00.748617    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 11:09:00.751401    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:09:00.758237    4699 start.go:296] duration metric: took 42.260292ms for postStartSetup
	I0802 11:09:00.758250    4699 fix.go:56] duration metric: took 20.803541167s for fixHost
	I0802 11:09:00.758282    4699 main.go:141] libmachine: Using SSH client type: native
	I0802 11:09:00.758403    4699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acea10] 0x102ad1270 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0802 11:09:00.758408    4699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 11:09:00.810608    4699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722622141.021780212
	
	I0802 11:09:00.810615    4699 fix.go:216] guest clock: 1722622141.021780212
	I0802 11:09:00.810623    4699 fix.go:229] Guest: 2024-08-02 11:09:01.021780212 -0700 PDT Remote: 2024-08-02 11:09:00.758251 -0700 PDT m=+20.924383001 (delta=263.529212ms)
	I0802 11:09:00.810633    4699 fix.go:200] guest clock delta is within tolerance: 263.529212ms
	I0802 11:09:00.810636    4699 start.go:83] releasing machines lock for "stopped-upgrade-387000", held for 20.855935958s
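	Note: the guest-clock check above runs date +%s.%N inside the VM and compares the result to the host's wall clock; the ~264ms delta is within tolerance, so no resync is performed. The same measurement can be taken by hand over the forwarded SSH port (50471 in this run; whole seconds keep the host side portable, since macOS date(1) has no %N):

	    # measure guest/host clock skew the way fix.go does
	    guest=$(ssh -p 50471 docker@127.0.0.1 'date +%s')
	    host=$(date +%s)
	    echo "skew: $((guest - host))s"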
	I0802 11:09:00.810696    4699 ssh_runner.go:195] Run: cat /version.json
	I0802 11:09:00.810705    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:09:00.810696    4699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 11:09:00.810738    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	W0802 11:09:00.811246    4699 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50594->127.0.0.1:50471: write: broken pipe
	I0802 11:09:00.811262    4699 retry.go:31] will retry after 367.303703ms: ssh: handshake failed: write tcp 127.0.0.1:50594->127.0.0.1:50471: write: broken pipe
	W0802 11:09:01.229456    4699 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0802 11:09:01.229610    4699 ssh_runner.go:195] Run: systemctl --version
	I0802 11:09:01.233196    4699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 11:09:01.236753    4699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 11:09:01.236821    4699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0802 11:09:01.242213    4699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0802 11:09:01.253373    4699 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 11:09:01.253388    4699 start.go:495] detecting cgroup driver to use...
	I0802 11:09:01.253498    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:09:01.266116    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0802 11:09:01.269771    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0802 11:09:01.274747    4699 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0802 11:09:01.274818    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0802 11:09:01.278441    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:09:01.281527    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0802 11:09:01.284886    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 11:09:01.289223    4699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 11:09:01.292903    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0802 11:09:01.296127    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0802 11:09:01.299823    4699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0802 11:09:01.303267    4699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 11:09:01.305911    4699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 11:09:01.308772    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:01.379646    4699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0802 11:09:01.387023    4699 start.go:495] detecting cgroup driver to use...
	I0802 11:09:01.387098    4699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0802 11:09:01.392610    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:09:01.397940    4699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 11:09:01.404365    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 11:09:01.409044    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 11:09:01.413862    4699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0802 11:09:01.455001    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 11:09:01.459927    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 11:09:01.465406    4699 ssh_runner.go:195] Run: which cri-dockerd
	I0802 11:09:01.466673    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0802 11:09:01.469187    4699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0802 11:09:01.474414    4699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0802 11:09:01.554951    4699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0802 11:09:01.632077    4699 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0802 11:09:01.632154    4699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0802 11:09:01.638124    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:01.719886    4699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:09:02.882062    4699 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162202334s)
	I0802 11:09:02.882127    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0802 11:09:02.886822    4699 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0802 11:09:02.892743    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:09:02.897622    4699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0802 11:09:02.973078    4699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0802 11:09:03.051779    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:03.131221    4699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0802 11:09:03.138170    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 11:09:03.142768    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:03.219207    4699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0802 11:09:03.265508    4699 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0802 11:09:03.265612    4699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0802 11:09:03.267828    4699 start.go:563] Will wait 60s for crictl version
	I0802 11:09:03.267869    4699 ssh_runner.go:195] Run: which crictl
	I0802 11:09:03.269118    4699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 11:09:03.283871    4699 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0802 11:09:03.283932    4699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 11:09:03.304666    4699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 11:09:03.325383    4699 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0802 11:09:03.325449    4699 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0802 11:09:03.326683    4699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
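	Note: the one-liner above is minikube's idempotent /etc/hosts pin: grep -v strips any stale host.minikube.internal entry, the current mapping is appended, and the temp file is copied back over /etc/hosts in one step, so repeated starts never accumulate duplicate entries. The same pattern unpacked (10.0.2.2 is this run's user-mode network gateway):

	    # idempotent /etc/hosts pin: drop the stale entry, append the current one, swap in
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts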
	I0802 11:09:03.330091    4699 kubeadm.go:883] updating cluster {Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0802 11:09:03.330150    4699 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0802 11:09:03.330191    4699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:09:03.340727    4699 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:09:03.340737    4699 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:09:03.340784    4699 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:09:03.344419    4699 ssh_runner.go:195] Run: which lz4
	I0802 11:09:03.345738    4699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 11:09:03.346982    4699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 11:09:03.346992    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0802 11:09:04.239717    4699 docker.go:649] duration metric: took 894.036916ms to copy over tarball
	I0802 11:09:04.239792    4699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 11:09:05.396966    4699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157201042s)
	I0802 11:09:05.396980    4699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 11:09:05.412963    4699 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 11:09:05.416370    4699 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0802 11:09:05.421303    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:05.499035    4699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 11:09:07.070674    4699 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.571678958s)
	I0802 11:09:07.070768    4699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 11:09:07.082269    4699 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 11:09:07.082281    4699 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0802 11:09:07.082287    4699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 11:09:07.088749    4699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.090616    4699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.092380    4699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.092469    4699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.094584    4699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.094760    4699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.095911    4699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.096356    4699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.097263    4699 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0802 11:09:07.098232    4699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.098269    4699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.099472    4699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.099499    4699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.099532    4699 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0802 11:09:07.100318    4699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.100927    4699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.552872    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.552872    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.564954    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.564954    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.572858    4699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0802 11:09:07.572901    4699 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.572959    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0802 11:09:07.575544    4699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0802 11:09:07.575564    4699 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.575606    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0802 11:09:07.586036    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0802 11:09:07.591937    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.597840    4699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0802 11:09:07.597861    4699 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.597912    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0802 11:09:07.598526    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0802 11:09:07.598673    4699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0802 11:09:07.598682    4699 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.598708    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0802 11:09:07.603276    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0802 11:09:07.609126    4699 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0802 11:09:07.609257    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.612360    4699 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0802 11:09:07.612377    4699 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0802 11:09:07.612431    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0802 11:09:07.622048    4699 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0802 11:09:07.622072    4699 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.622128    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0802 11:09:07.624089    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0802 11:09:07.632192    4699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0802 11:09:07.632207    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0802 11:09:07.632212    4699 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.632261    4699 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0802 11:09:07.634205    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0802 11:09:07.634313    4699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0802 11:09:07.646219    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0802 11:09:07.646239    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0802 11:09:07.646270    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0802 11:09:07.646284    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0802 11:09:07.646344    4699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:09:07.648617    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0802 11:09:07.648638    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0802 11:09:07.685782    4699 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0802 11:09:07.685799    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0802 11:09:07.712089    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0802 11:09:07.712131    4699 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0802 11:09:07.712140    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0802 11:09:07.749214    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0802 11:09:07.875231    4699 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0802 11:09:07.875344    4699 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.887689    4699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0802 11:09:07.887716    4699 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.887775    4699 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:09:07.904460    4699 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0802 11:09:07.904578    4699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:09:07.906046    4699 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0802 11:09:07.906059    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0802 11:09:07.932998    4699 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0802 11:09:07.933011    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0802 11:09:08.170542    4699 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0802 11:09:08.170582    4699 cache_images.go:92] duration metric: took 1.088328375s to LoadCachedImages
	W0802 11:09:08.170632    4699 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
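	Note: this warning marks the point where the image-restore path gave up: the preload tarball carries the old k8s.gcr.io tags, minikube v1.33.1 expects registry.k8s.io names, and the per-image fallback cache held only pause, coredns, and the storage provisioner, so the control-plane images were removed from the runtime while the kube-apiserver cache file (among others) was missing and never reloaded. A plausible manual check and repair, assuming registry access ("cache add" is the long-standing spelling; "image load" is the current one):

	    # inspect what actually made it into the per-image cache
	    ls /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/
	    # repopulate a missing image
	    out/minikube-darwin-arm64 cache add registry.k8s.io/kube-apiserver:v1.24.1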
	I0802 11:09:08.170639    4699 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0802 11:09:08.170693    4699 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 11:09:08.170763    4699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0802 11:09:08.186803    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:09:08.186815    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:09:08.186821    4699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 11:09:08.186829    4699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-387000 NodeName:stopped-upgrade-387000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 11:09:08.186902    4699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-387000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 11:09:08.186961    4699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0802 11:09:08.189980    4699 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 11:09:08.190026    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 11:09:08.192717    4699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0802 11:09:08.197942    4699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 11:09:08.202676    4699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
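	The three "scp memory" lines write rendered buffers straight to guest paths instead of staging local files. One way to implement that pattern, sketched with the system ssh client and a hypothetical host alias minikube-guest:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // pushBytes writes data to path on the guest by piping it through
    // `sudo tee`, one way to realize an "scp memory" transfer.
    // "minikube-guest" is a hypothetical SSH host alias, not the test's.
    func pushBytes(host, path string, data []byte) error {
    	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s > /dev/null", path))
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run()
    }

    func main() {
    	yaml := []byte("# rendered kubeadm.yaml contents would go here\n")
    	if err := pushBytes("minikube-guest", "/var/tmp/minikube/kubeadm.yaml.new", yaml); err != nil {
    		panic(err)
    	}
    }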
	I0802 11:09:08.207982    4699 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0802 11:09:08.209392    4699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
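	The bash one-liner above rewrites /etc/hosts in place: strip any stale control-plane.minikube.internal line, append the current mapping, and copy the result back. An equivalent sketch in Go (requires root; a rough equivalent, not minikube's code):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell one-liner: drop any line ending
    // in "<tab>control-plane.minikube.internal", then append the fresh
    // IP mapping.
    func ensureHostsEntry(ip string) error {
    	const marker = "\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, marker) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+marker)
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("10.0.2.15"); err != nil {
    		panic(err)
    	}
    }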
	I0802 11:09:08.213010    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:09:08.287483    4699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:09:08.292517    4699 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000 for IP: 10.0.2.15
	I0802 11:09:08.292526    4699 certs.go:194] generating shared ca certs ...
	I0802 11:09:08.292534    4699 certs.go:226] acquiring lock for ca certs: {Name:mkac8babaf2bcf8bb25aa8e1753c51c03330d7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.292697    4699 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key
	I0802 11:09:08.292732    4699 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key
	I0802 11:09:08.292737    4699 certs.go:256] generating profile certs ...
	I0802 11:09:08.292804    4699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key
	I0802 11:09:08.292820    4699 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6
	I0802 11:09:08.292832    4699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0802 11:09:08.357945    4699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 ...
	I0802 11:09:08.357959    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6: {Name:mka86f54a14f32e9568dd2405cd0db2a37448308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.358678    4699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6 ...
	I0802 11:09:08.358684    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6: {Name:mk6dd8d61bfdc6521999136ed418d64b051deb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.358860    4699 certs.go:381] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt.684384e6 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt
	I0802 11:09:08.358986    4699 certs.go:385] copying /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key.684384e6 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key
	I0802 11:09:08.359131    4699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.key
	I0802 11:09:08.359266    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem (1338 bytes)
	W0802 11:09:08.359292    4699 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747_empty.pem, impossibly tiny 0 bytes
	I0802 11:09:08.359297    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 11:09:08.359317    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem (1078 bytes)
	I0802 11:09:08.359335    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem (1123 bytes)
	I0802 11:09:08.359358    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/key.pem (1675 bytes)
	I0802 11:09:08.359400    4699 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem (1708 bytes)
	I0802 11:09:08.359728    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 11:09:08.366909    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0802 11:09:08.373263    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 11:09:08.380472    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 11:09:08.387701    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 11:09:08.396718    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 11:09:08.404142    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 11:09:08.411653    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 11:09:08.418810    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/1747.pem --> /usr/share/ca-certificates/1747.pem (1338 bytes)
	I0802 11:09:08.425352    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/ssl/certs/17472.pem --> /usr/share/ca-certificates/17472.pem (1708 bytes)
	I0802 11:09:08.432413    4699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 11:09:08.439304    4699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 11:09:08.444402    4699 ssh_runner.go:195] Run: openssl version
	I0802 11:09:08.446438    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17472.pem && ln -fs /usr/share/ca-certificates/17472.pem /etc/ssl/certs/17472.pem"
	I0802 11:09:08.449160    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.450513    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:35 /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.450531    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17472.pem
	I0802 11:09:08.452286    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 11:09:08.455317    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 11:09:08.457984    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.459356    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:26 /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.459373    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 11:09:08.461078    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 11:09:08.464255    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1747.pem && ln -fs /usr/share/ca-certificates/1747.pem /etc/ssl/certs/1747.pem"
	I0802 11:09:08.467353    4699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.468649    4699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:35 /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.468665    4699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1747.pem
	I0802 11:09:08.470528    4699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1747.pem /etc/ssl/certs/51391683.0"
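	The test/openssl/ln triplets above rebuild the OpenSSL hashed-certs layout by hand, the way c_rehash would: each CA gets an /etc/ssl/certs/<subject-hash>.0 symlink. A Go sketch of one rehash step (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // rehash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
    // the same layout the `openssl x509 -hash` + `ln -fs` pair builds.
    func rehash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }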
	I0802 11:09:08.473347    4699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 11:09:08.474732    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 11:09:08.476658    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 11:09:08.478351    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 11:09:08.480191    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 11:09:08.481988    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 11:09:08.483623    4699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
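	The -checkend 86400 probes ask openssl whether each certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 (path taken from the log; a sketch, not the test's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires inside the given window; `openssl x509 -checkend 86400`
    // asks the same question for a 24h window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }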
	I0802 11:09:08.485320    4699 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0802 11:09:08.485382    4699 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:09:08.495337    4699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 11:09:08.498446    4699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 11:09:08.498451    4699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 11:09:08.498474    4699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 11:09:08.501072    4699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 11:09:08.501377    4699 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-387000" does not appear in /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:09:08.501485    4699 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-1243/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-387000" cluster setting kubeconfig missing "stopped-upgrade-387000" context setting]
	I0802 11:09:08.501690    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:09:08.502202    4699 kapi.go:59] client config for stopped-upgrade-387000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e641b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
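	The rest.Config dump above carries everything needed to build a client against the restarted cluster. A minimal client-go sketch using the same host and cert paths (assumes the client-go modules are available; this is not the test's code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Field values copied from the kapi.go:59 log line above.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }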
	I0802 11:09:08.502543    4699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 11:09:08.505131    4699 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-387000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
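	The drift check above rests on diff's exit-status convention: 0 for identical files, 1 for differing files, anything higher for a real error. A small Go sketch of that convention (paths taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new`: exit 0 means identical,
    // exit 1 means the files differ (drift), anything else is an error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + diff)
    	}
    }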
	I0802 11:09:08.505138    4699 kubeadm.go:1160] stopping kube-system containers ...
	I0802 11:09:08.505178    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 11:09:08.515726    4699 docker.go:483] Stopping containers: [06f4cb7c5b7d 8d6ae6ac7f08 c62a1899d653 0237f334d11e 241be9c6963f beaa5f7a2b37 179baee8dbee 15f78d53f678]
	I0802 11:09:08.515790    4699 ssh_runner.go:195] Run: docker stop 06f4cb7c5b7d 8d6ae6ac7f08 c62a1899d653 0237f334d11e 241be9c6963f beaa5f7a2b37 179baee8dbee 15f78d53f678
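	The stop step is two commands: list the kube-system pod containers, then stop them all in one invocation. A Go sketch of that pair (docker CLI assumed on PATH):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem mirrors the two commands above: collect the IDs of
    // kube-system pod containers, then stop them in one `docker stop`.
    func stopKubeSystem() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil
    	}
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	if err := stopKubeSystem(); err != nil {
    		panic(err)
    	}
    }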
	I0802 11:09:08.526602    4699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 11:09:08.531980    4699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:09:08.534805    4699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:09:08.534813    4699 kubeadm.go:157] found existing configuration files:
	
	I0802 11:09:08.534836    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0802 11:09:08.537186    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:09:08.537203    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:09:08.540051    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0802 11:09:08.542743    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:09:08.542765    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:09:08.545151    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0802 11:09:08.548003    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:09:08.548024    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:09:08.550565    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0802 11:09:08.552799    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:09:08.552818    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 11:09:08.555708    4699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:09:08.558530    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:08.582430    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.020832    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.153011    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 11:09:09.178895    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
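	Rather than a full `kubeadm init`, the restart path replays individual init phases against the rendered config, in the order shown above. A sketch of that sequence (sudo and PATH handling elided; binary and config paths taken from the log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// The five phases the restart path runs, in log order.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			panic(err)
    		}
    	}
    }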
	I0802 11:09:09.202707    4699 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:09:09.202792    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:09.704814    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:10.204789    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:09:10.209102    4699 api_server.go:72] duration metric: took 1.006431875s to wait for apiserver process to appear ...
	I0802 11:09:10.209112    4699 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:09:10.209123    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:15.211087    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:15.211118    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:20.211168    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:20.211187    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:25.211305    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:25.211320    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:30.211524    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:30.211568    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:35.211963    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:35.212012    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:40.212719    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:40.212752    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:45.213569    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:45.213640    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:50.215012    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:50.215142    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:09:55.215875    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:09:55.215955    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:00.217627    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:00.217751    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:05.217954    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:05.217973    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:10.219956    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
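	The failed probes above are a plain healthz poll: GET https://10.0.2.15:8443/healthz with a short client timeout, retry, and after the final failure fall back to gathering component logs. A minimal Go sketch of the same loop (InsecureSkipVerify only keeps the sketch self-contained; the real client trusts the cluster CA shown earlier):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz probes the apiserver healthz endpoint, sleeping between
    // attempts, and reports whether it ever returned 200 OK.
    func pollHealthz(url string, attempts int) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return true
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	fmt.Println("healthy:", pollHealthz("https://10.0.2.15:8443/healthz", 12))
    }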
	I0802 11:10:10.220121    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:10.240034    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:10.240131    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:10.252370    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:10.252452    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:10.262951    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:10.263026    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:10.273503    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:10.273574    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:10.284493    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:10.284568    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:10.295228    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:10.295292    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:10.305164    4699 logs.go:276] 0 containers: []
	W0802 11:10:10.305176    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:10.305229    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:10.315910    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:10.315927    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:10.315933    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:10.327411    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:10.327422    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:10.340175    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:10.340187    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:10.352337    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:10.352351    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:10.356867    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:10.356873    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:10.371421    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:10.371432    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:10.386296    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:10.386306    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:10.405087    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:10.405099    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:10.418915    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:10.418926    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:10.438762    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:10.438773    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:10.454096    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:10.454109    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:10.570478    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:10.570490    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:10.583320    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:10.583331    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:10.607885    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:10.607895    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:10.644960    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:10.644968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:10.658526    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:10.658540    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:10.670638    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:10.670650    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:13.213023    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:18.214233    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:18.214403    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:18.226437    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:18.226515    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:18.237522    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:18.237596    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:18.247842    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:18.247913    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:18.257842    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:18.257910    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:18.268823    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:18.268893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:18.279666    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:18.279735    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:18.289681    4699 logs.go:276] 0 containers: []
	W0802 11:10:18.289692    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:18.289752    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:18.301107    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:18.301127    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:18.301133    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:18.312569    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:18.312583    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:18.324428    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:18.324439    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:18.364619    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:18.364629    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:18.378700    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:18.378710    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:18.390672    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:18.390687    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:18.408585    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:18.408596    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:18.420072    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:18.420083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:18.437766    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:18.437777    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:18.449685    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:18.449697    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:18.495451    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:18.495469    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:18.511315    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:18.511327    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:18.528449    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:18.528459    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:18.540001    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:18.540012    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:18.566848    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:18.566865    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:18.571348    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:18.571355    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:18.607648    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:18.607659    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:21.128355    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:26.130659    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:26.130900    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:26.156174    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:26.156290    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:26.172937    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:26.173021    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:26.185873    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:26.185952    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:26.198603    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:26.198678    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:26.211328    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:26.211403    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:26.221680    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:26.221745    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:26.232142    4699 logs.go:276] 0 containers: []
	W0802 11:10:26.232152    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:26.232211    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:26.243206    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:26.243224    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:26.243231    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:26.256461    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:26.256471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:26.267407    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:26.267419    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:26.279426    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:26.279442    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:26.294827    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:26.294840    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:26.310151    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:26.310161    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:26.349596    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:26.349605    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:26.364137    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:26.364151    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:26.402171    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:26.402183    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:26.413681    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:26.413693    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:26.418261    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:26.418268    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:26.433074    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:26.433086    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:26.472221    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:26.472232    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:26.483407    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:26.483419    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:26.508629    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:26.508637    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:26.522864    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:26.522874    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:26.539500    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:26.539511    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:29.052707    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:34.054936    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:34.055230    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:34.073687    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:34.073780    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:34.087232    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:34.087307    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:34.098721    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:34.098795    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:34.111974    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:34.112048    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:34.121910    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:34.121973    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:34.131960    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:34.132036    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:34.142103    4699 logs.go:276] 0 containers: []
	W0802 11:10:34.142115    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:34.142169    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:34.152971    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:34.152989    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:34.152994    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:34.164827    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:34.164836    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:34.189445    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:34.189454    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:34.205667    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:34.205678    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:34.245903    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:34.245913    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:34.282778    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:34.282793    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:34.297048    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:34.297060    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:34.315062    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:34.315073    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:34.329928    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:34.329941    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:34.341597    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:34.341608    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:34.381043    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:34.381053    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:34.385765    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:34.385772    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:34.399274    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:34.399286    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:34.410616    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:34.410631    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:34.421509    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:34.421522    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:34.433517    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:34.433529    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:34.450578    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:34.450591    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:36.968148    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:41.970444    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:41.970603    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:41.984287    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:41.984363    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:41.995931    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:41.996010    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:42.007763    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:42.007841    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:42.024164    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:42.024238    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:42.035927    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:42.035991    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:42.046511    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:42.046584    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:42.056428    4699 logs.go:276] 0 containers: []
	W0802 11:10:42.056441    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:42.056508    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:42.066809    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:42.066828    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:42.066833    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:42.080878    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:42.080894    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:42.092837    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:42.092846    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:42.108991    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:42.109001    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:42.121177    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:42.121189    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:42.125818    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:42.125824    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:42.145484    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:42.145495    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:42.159664    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:42.159673    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:42.170725    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:42.170737    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:42.208229    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:42.208240    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:42.244977    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:42.244988    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:42.282851    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:42.282869    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:42.293963    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:42.293976    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:42.312260    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:42.312269    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:42.326099    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:42.326113    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:42.338398    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:42.338412    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:42.350420    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:42.350431    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:44.874319    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:49.876365    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:49.876487    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:49.890225    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:49.890299    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:49.901583    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:49.901656    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:49.912437    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:49.912506    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:49.922823    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:49.922893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:49.933026    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:49.933096    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:49.943547    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:49.943621    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:49.954131    4699 logs.go:276] 0 containers: []
	W0802 11:10:49.954152    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:49.954211    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:49.964687    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:49.964704    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:49.964710    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:49.980381    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:49.980392    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:49.992157    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:49.992169    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:50.013951    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:50.013968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:50.027764    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:50.027774    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:50.031867    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:50.031873    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:50.068529    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:50.068548    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:50.080221    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:50.080234    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:50.092407    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:50.092420    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:50.137844    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:50.137858    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:50.151789    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:50.151802    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:50.166078    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:50.166091    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:10:50.191013    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:50.191024    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:50.230111    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:50.230126    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:50.242157    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:50.242170    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:50.257111    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:50.257124    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:50.268557    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:50.268568    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:52.782407    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:10:57.784584    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:10:57.784777    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:10:57.802096    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:10:57.802189    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:10:57.815540    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:10:57.815622    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:10:57.828840    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:10:57.828902    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:10:57.839861    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:10:57.839936    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:10:57.850450    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:10:57.850520    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:10:57.861027    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:10:57.861093    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:10:57.871194    4699 logs.go:276] 0 containers: []
	W0802 11:10:57.871205    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:10:57.871265    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:10:57.882842    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:10:57.882861    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:10:57.882866    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:10:57.920536    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:10:57.920546    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:10:57.957085    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:10:57.957097    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:10:57.971846    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:10:57.971861    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:10:58.010236    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:10:58.010252    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:10:58.023705    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:10:58.023715    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:10:58.048798    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:10:58.048814    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:10:58.065547    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:10:58.065559    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:10:58.077028    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:10:58.077041    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:10:58.098361    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:10:58.098371    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:10:58.110573    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:10:58.110582    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:10:58.123504    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:10:58.123516    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:10:58.134851    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:10:58.134867    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:10:58.138923    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:10:58.138930    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:10:58.151003    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:10:58.151017    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:10:58.165849    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:10:58.165859    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:10:58.177354    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:10:58.177366    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:00.702288    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:05.704543    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:05.704747    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:05.725637    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:05.725734    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:05.740908    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:05.740985    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:05.754827    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:05.754892    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:05.765944    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:05.766022    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:05.776786    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:05.776851    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:05.794245    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:05.794315    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:05.805285    4699 logs.go:276] 0 containers: []
	W0802 11:11:05.805298    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:05.805359    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:05.826773    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:05.826792    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:05.826797    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:05.840877    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:05.840888    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:05.857730    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:05.857740    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:05.896485    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:05.896495    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:05.910686    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:05.910697    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:05.924754    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:05.924767    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:05.936553    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:05.936563    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:05.962964    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:05.962972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:05.996631    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:05.996642    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:06.034860    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:06.034870    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:06.046318    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:06.046328    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:06.064337    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:06.064350    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:06.076282    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:06.076292    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:06.088448    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:06.088460    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:06.105940    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:06.105951    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:06.119746    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:06.119757    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:06.124265    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:06.124271    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:08.637632    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:13.639839    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:13.640101    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:13.659771    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:13.659857    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:13.674816    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:13.674904    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:13.687104    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:13.687176    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:13.697641    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:13.697718    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:13.708294    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:13.708364    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:13.718974    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:13.719045    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:13.729432    4699 logs.go:276] 0 containers: []
	W0802 11:11:13.729446    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:13.729525    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:13.740189    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:13.740208    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:13.740213    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:13.754347    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:13.754358    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:13.767419    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:13.767431    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:13.779331    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:13.779342    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:13.797010    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:13.797021    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:13.833820    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:13.833832    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:13.872932    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:13.872943    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:13.885400    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:13.885413    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:13.904259    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:13.904270    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:13.923364    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:13.923376    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:13.935530    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:13.935542    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:13.947380    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:13.947390    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:13.970713    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:13.970722    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:13.984248    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:13.984259    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:13.996013    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:13.996027    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:14.009655    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:14.009665    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:14.047644    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:14.047652    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
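
Each retry cycle opens with the same discovery step: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, whose ID list feeds the "N containers: [...]" lines (logs.go:276). A hedged sketch of that step, assuming nothing beyond what the commands in the log show (the helper name and component list handling are illustrative, not minikube's code):

// discover.go - hedged sketch of the per-component container discovery.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker ps filter seen in the log and splits
// the --format={{.ID}} output into one ID per container (running or exited,
// hence -a: restarted components report two IDs, as in the log).
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component names copied from the filters in the log above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			continue
		}
		// Mirrors the "N containers: [...]" lines; an empty result produces
		// the `No container was found matching "kindnet"` warning seen above.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
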
	I0802 11:11:16.554106    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:21.556457    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:21.556734    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:21.590640    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:21.590773    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:21.609627    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:21.609722    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:21.624191    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:21.624272    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:21.638666    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:21.638740    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:21.649282    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:21.649358    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:21.660147    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:21.660220    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:21.670709    4699 logs.go:276] 0 containers: []
	W0802 11:11:21.670721    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:21.670776    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:21.682194    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:21.682212    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:21.682217    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:21.696636    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:21.696649    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:21.712862    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:21.712875    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:21.736085    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:21.736092    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:21.771320    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:21.771332    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:21.810160    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:21.810170    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:21.821654    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:21.821666    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:21.833378    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:21.833389    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:21.845211    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:21.845222    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:21.857419    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:21.857432    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:21.872166    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:21.872176    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:21.887958    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:21.887968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:21.905576    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:21.905585    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:21.919598    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:21.919608    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:21.957612    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:21.957624    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:21.961827    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:21.961835    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:21.973387    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:21.973397    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:24.486957    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:29.487237    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:29.487396    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:29.503956    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:29.504054    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:29.531384    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:29.531479    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:29.543239    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:29.543313    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:29.553873    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:29.553946    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:29.565081    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:29.565146    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:29.575492    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:29.575569    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:29.585597    4699 logs.go:276] 0 containers: []
	W0802 11:11:29.585609    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:29.585665    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:29.596949    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:29.596966    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:29.596972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:29.634346    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:29.634357    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:29.650729    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:29.650743    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:29.665004    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:29.665017    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:29.676255    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:29.676267    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:29.698411    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:29.698425    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:29.710571    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:29.710586    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:29.722408    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:29.722423    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:29.736852    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:29.736862    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:29.741140    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:29.741146    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:29.781815    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:29.781826    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:29.793577    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:29.793589    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:29.811250    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:29.811264    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:29.828466    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:29.828480    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:29.846193    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:29.846209    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:29.871026    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:29.871035    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:29.907175    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:29.907186    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:32.423129    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:37.425192    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:37.425311    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:37.436491    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:37.436565    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:37.446981    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:37.447047    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:37.457196    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:37.457258    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:37.467708    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:37.467782    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:37.478847    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:37.478921    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:37.493471    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:37.493534    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:37.503702    4699 logs.go:276] 0 containers: []
	W0802 11:11:37.503712    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:37.503772    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:37.518256    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:37.518273    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:37.518278    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:37.532888    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:37.532899    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:37.550157    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:37.550166    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:37.574120    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:37.574129    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:37.585737    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:37.585752    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:37.599970    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:37.599981    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:37.639460    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:37.639475    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:37.653070    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:37.653083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:37.689426    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:37.689438    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:37.701456    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:37.701468    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:37.715222    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:37.715236    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:37.727193    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:37.727207    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:37.731357    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:37.731364    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:37.764815    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:37.764828    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:37.776358    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:37.776369    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:37.792341    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:37.792353    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:37.807861    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:37.807872    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:40.321790    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:45.324097    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:45.324272    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:45.342673    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:45.342766    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:45.362261    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:45.362335    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:45.373650    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:45.373723    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:45.387586    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:45.387653    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:45.398724    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:45.398794    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:45.408912    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:45.409006    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:45.418808    4699 logs.go:276] 0 containers: []
	W0802 11:11:45.418818    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:45.418873    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:45.429723    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:45.429744    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:45.429750    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:45.433926    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:45.433935    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:45.445469    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:45.445479    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:45.457738    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:45.457752    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:45.481282    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:45.481289    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:45.518649    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:45.518662    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:45.532823    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:45.532834    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:45.548507    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:45.548519    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:45.560407    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:45.560418    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:45.599073    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:45.599083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:45.610194    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:45.610206    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:45.621546    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:45.621558    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:45.639431    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:45.639444    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:45.677923    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:45.677936    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:45.694794    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:45.694805    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:45.712774    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:45.712785    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:45.724166    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:45.724176    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:48.241700    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:11:53.243888    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:11:53.243991    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:11:53.255166    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:11:53.255241    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:11:53.265971    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:11:53.266052    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:11:53.276918    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:11:53.276988    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:11:53.287820    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:11:53.287893    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:11:53.298961    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:11:53.299031    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:11:53.309494    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:11:53.309557    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:11:53.319843    4699 logs.go:276] 0 containers: []
	W0802 11:11:53.319858    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:11:53.319921    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:11:53.331020    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:11:53.331038    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:11:53.331044    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:11:53.345037    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:11:53.345051    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:11:53.384435    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:11:53.384445    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:11:53.395795    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:11:53.395808    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:11:53.408029    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:11:53.408040    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:11:53.445293    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:11:53.445302    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:11:53.459645    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:11:53.459658    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:11:53.474474    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:11:53.474485    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:11:53.493197    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:11:53.493208    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:11:53.505592    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:11:53.505605    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:11:53.523535    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:11:53.523549    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:11:53.535475    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:11:53.535488    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:11:53.555063    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:11:53.555078    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:11:53.591702    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:11:53.591717    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:11:53.602768    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:11:53.602780    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:11:53.616540    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:11:53.616553    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:11:53.642022    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:11:53.642033    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:11:56.148251    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:01.150711    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:01.151183    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:01.190711    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:01.190863    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:01.218151    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:01.218252    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:01.232901    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:01.232983    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:01.244850    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:01.244926    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:01.255324    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:01.255395    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:01.266012    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:01.266084    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:01.276303    4699 logs.go:276] 0 containers: []
	W0802 11:12:01.276318    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:01.276381    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:01.287455    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:01.287475    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:01.287480    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:01.325887    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:01.325896    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:01.329966    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:01.329972    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:01.364431    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:01.364442    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:01.379072    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:01.379083    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:01.402874    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:01.402882    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:01.414780    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:01.414793    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:01.430495    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:01.430505    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:01.447857    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:01.447870    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:01.485204    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:01.485214    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:01.499272    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:01.499283    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:01.511531    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:01.511544    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:01.522966    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:01.522981    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:01.536530    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:01.536539    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:01.553190    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:01.553201    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:01.567725    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:01.567735    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:01.581840    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:01.581850    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:04.095361    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:09.097966    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:09.098368    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:09.139585    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:09.139726    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:09.158897    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:09.159015    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:09.174115    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:09.174196    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:09.187029    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:09.187097    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:09.198045    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:09.198115    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:09.214414    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:09.214486    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:09.224496    4699 logs.go:276] 0 containers: []
	W0802 11:12:09.224507    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:09.224560    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:09.235719    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:09.235736    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:09.235741    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:09.247980    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:09.247990    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:09.260165    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:09.260176    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:09.284438    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:09.284444    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:09.296955    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:09.296968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:09.313421    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:09.313435    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:09.352599    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:09.352615    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:09.364941    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:09.364955    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:09.400001    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:09.400012    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:09.417130    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:09.417141    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:09.433401    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:09.433413    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:09.450998    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:09.451013    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:09.464402    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:09.464413    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:09.502726    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:09.502732    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:09.517062    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:09.517073    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:09.528921    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:09.528933    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:09.543373    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:09.543385    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
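
The block above is one iteration of minikube's diagnostic loop: roughly every 8 seconds it probes the apiserver's /healthz endpoint with a 5-second client timeout, and on failure enumerates each control-plane component's containers (docker ps -a --filter=name=k8s_<component>) and tails their logs. As a minimal shell reproduction of the same probe-and-collect cycle, assuming curl and docker access inside the guest (the endpoint and tail length are taken from the log; this is a sketch, not minikube's Go implementation):

	#!/bin/bash
	# Probe the apiserver the way the log does (self-signed cert, 5s timeout).
	if ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok$'; then
	  # On failure, enumerate each component's containers and tail their logs.
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    for id in $(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done
	  done
	fi

The same cycle repeats below, unchanged except for timestamps, until the control-plane restart budget (about four minutes here) is exhausted.
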
	I0802 11:12:12.049669    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:17.051751    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:17.051999    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:17.075137    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:17.075267    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:17.091570    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:17.091650    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:17.104997    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:17.105082    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:17.116219    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:17.116290    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:17.127188    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:17.127256    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:17.137933    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:17.138006    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:17.148109    4699 logs.go:276] 0 containers: []
	W0802 11:12:17.148126    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:17.148182    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:17.158883    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:17.158900    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:17.158906    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:17.171118    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:17.171131    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:17.182573    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:17.182587    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:17.196161    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:17.196174    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:17.236353    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:17.236369    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:17.248683    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:17.248693    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:17.260350    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:17.260362    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:17.283690    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:17.283700    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:17.322366    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:17.322382    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:17.338061    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:17.338083    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:17.360081    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:17.360097    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:17.372370    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:17.372381    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:17.377015    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:17.377022    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:17.413628    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:17.413640    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:17.428120    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:17.428131    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:17.441819    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:17.441830    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:17.457460    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:17.457471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:19.973015    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:24.975254    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:24.975437    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:24.999821    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:24.999933    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:25.016692    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:25.016782    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:25.029405    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:25.029477    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:25.040363    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:25.040431    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:25.051180    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:25.051237    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:25.061720    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:25.061791    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:25.072263    4699 logs.go:276] 0 containers: []
	W0802 11:12:25.072276    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:25.072330    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:25.082945    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:25.082963    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:25.082968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:25.094672    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:25.094683    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:25.105609    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:25.105621    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:25.119690    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:25.119700    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:25.131723    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:25.131735    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:25.143276    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:25.143287    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:25.167402    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:25.167408    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:25.179225    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:25.179239    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:25.218465    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:25.218475    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:25.232626    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:25.232636    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:25.270414    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:25.270425    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:25.285702    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:25.285714    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:25.299184    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:25.299197    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:25.317028    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:25.317045    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:25.321123    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:25.321130    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:25.356680    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:25.356694    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:25.370699    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:25.370709    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:27.888220    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:32.890428    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:32.890588    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:32.901197    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:32.901269    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:32.911984    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:32.912055    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:32.922487    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:32.922560    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:32.932762    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:32.932841    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:32.943451    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:32.943520    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:32.954311    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:32.954375    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:32.964571    4699 logs.go:276] 0 containers: []
	W0802 11:12:32.964585    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:32.964648    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:32.975391    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:32.975412    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:32.975417    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:33.012595    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:33.012602    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:33.048972    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:33.048982    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:33.060858    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:33.060869    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:33.072485    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:33.072498    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:33.086382    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:33.086394    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:33.099073    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:33.099084    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:33.138505    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:33.138517    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:33.152352    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:33.152362    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:33.166595    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:33.166606    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:33.181703    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:33.181715    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:33.192929    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:33.192941    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:33.208087    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:33.208099    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:33.225337    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:33.225349    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:33.236560    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:33.236569    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:33.247672    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:33.247684    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:33.270096    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:33.270103    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:35.775930    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:40.778123    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:40.778312    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:40.799767    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:40.799845    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:40.812313    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:40.812388    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:40.823102    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:40.823168    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:40.834015    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:40.834082    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:40.844596    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:40.844665    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:40.856137    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:40.856199    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:40.866623    4699 logs.go:276] 0 containers: []
	W0802 11:12:40.866633    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:40.866684    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:40.876886    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:40.876903    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:40.876909    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:40.914788    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:40.914800    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:40.928862    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:40.928871    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:40.942623    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:40.942632    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:40.954021    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:40.954031    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:40.990302    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:40.990310    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:41.001756    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:41.001767    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:41.013406    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:41.013417    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:41.036245    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:41.036253    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:41.047652    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:41.047664    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:41.052114    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:41.052121    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:41.066305    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:41.066318    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:41.077718    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:41.077731    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:41.092802    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:41.092813    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:41.126393    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:41.126403    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:41.140026    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:41.140039    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:41.151133    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:41.151143    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:43.675797    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:48.676797    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:48.676981    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:48.691453    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:48.691537    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:48.702637    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:48.702706    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:48.717624    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:48.717694    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:48.732858    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:48.732930    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:48.743161    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:48.743240    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:48.753647    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:48.753713    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:48.763628    4699 logs.go:276] 0 containers: []
	W0802 11:12:48.763644    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:48.763707    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:48.779759    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:48.779778    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:48.779784    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:48.795333    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:48.795344    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:48.811449    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:48.811463    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:48.829433    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:48.829445    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:48.834230    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:48.834238    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:48.868657    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:48.868671    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:48.906000    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:48.906011    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:48.919802    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:48.919813    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:48.937589    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:48.937600    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:48.959473    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:48.959481    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:48.996023    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:48.996032    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:49.007296    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:49.007307    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:49.021215    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:49.021225    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:49.032252    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:49.032263    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:49.043761    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:49.043773    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:49.059101    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:49.059110    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:49.070987    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:49.070997    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:51.585091    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:12:56.587688    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:12:56.587931    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:12:56.607205    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:12:56.607297    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:12:56.622205    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:12:56.622278    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:12:56.634313    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:12:56.634390    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:12:56.645395    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:12:56.645466    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:12:56.656360    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:12:56.656434    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:12:56.667219    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:12:56.667286    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:12:56.677611    4699 logs.go:276] 0 containers: []
	W0802 11:12:56.677622    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:12:56.677683    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:12:56.687948    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:12:56.687963    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:12:56.687968    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:12:56.702937    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:12:56.702946    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:12:56.715046    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:12:56.715057    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:12:56.732888    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:12:56.732898    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:12:56.756385    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:12:56.756392    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:12:56.769910    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:12:56.769923    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:12:56.781415    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:12:56.781426    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:12:56.818887    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:12:56.818899    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:12:56.857954    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:12:56.857981    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:12:56.892977    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:12:56.892989    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:12:56.909367    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:12:56.909379    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:12:56.924516    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:12:56.924527    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:12:56.942088    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:12:56.942098    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:12:56.979600    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:12:56.979611    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:12:56.984006    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:12:56.984012    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:12:56.997625    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:12:56.997641    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:12:57.009771    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:12:57.009784    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:12:59.523727    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:04.526187    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:04.526415    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:13:04.545505    4699 logs.go:276] 2 containers: [3900967269d0 c62a1899d653]
	I0802 11:13:04.545605    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:13:04.559176    4699 logs.go:276] 2 containers: [b908c04ddd33 beaa5f7a2b37]
	I0802 11:13:04.559256    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:13:04.571076    4699 logs.go:276] 1 containers: [0bd18bfcc865]
	I0802 11:13:04.571158    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:13:04.600618    4699 logs.go:276] 2 containers: [c3057e829452 241be9c6963f]
	I0802 11:13:04.600712    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:13:04.616571    4699 logs.go:276] 1 containers: [d4dd057bd4e4]
	I0802 11:13:04.616641    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:13:04.634026    4699 logs.go:276] 2 containers: [d91d8c4098fe 06f4cb7c5b7d]
	I0802 11:13:04.634109    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:13:04.645226    4699 logs.go:276] 0 containers: []
	W0802 11:13:04.645239    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:13:04.645310    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:13:04.659262    4699 logs.go:276] 2 containers: [6c08d12c4809 0a713b0cfd25]
	I0802 11:13:04.659282    4699 logs.go:123] Gathering logs for etcd [b908c04ddd33] ...
	I0802 11:13:04.659288    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b908c04ddd33"
	I0802 11:13:04.673419    4699 logs.go:123] Gathering logs for etcd [beaa5f7a2b37] ...
	I0802 11:13:04.673430    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 beaa5f7a2b37"
	I0802 11:13:04.688351    4699 logs.go:123] Gathering logs for coredns [0bd18bfcc865] ...
	I0802 11:13:04.688363    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bd18bfcc865"
	I0802 11:13:04.699621    4699 logs.go:123] Gathering logs for kube-scheduler [c3057e829452] ...
	I0802 11:13:04.699633    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3057e829452"
	I0802 11:13:04.712105    4699 logs.go:123] Gathering logs for kube-scheduler [241be9c6963f] ...
	I0802 11:13:04.712116    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 241be9c6963f"
	I0802 11:13:04.727365    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:13:04.727374    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:13:04.766569    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:13:04.766577    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:13:04.770835    4699 logs.go:123] Gathering logs for kube-apiserver [c62a1899d653] ...
	I0802 11:13:04.770842    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62a1899d653"
	I0802 11:13:04.810322    4699 logs.go:123] Gathering logs for kube-controller-manager [d91d8c4098fe] ...
	I0802 11:13:04.810332    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d91d8c4098fe"
	I0802 11:13:04.828020    4699 logs.go:123] Gathering logs for storage-provisioner [6c08d12c4809] ...
	I0802 11:13:04.828035    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08d12c4809"
	I0802 11:13:04.839287    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:13:04.839298    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:13:04.851194    4699 logs.go:123] Gathering logs for kube-controller-manager [06f4cb7c5b7d] ...
	I0802 11:13:04.851206    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f4cb7c5b7d"
	I0802 11:13:04.865252    4699 logs.go:123] Gathering logs for storage-provisioner [0a713b0cfd25] ...
	I0802 11:13:04.865262    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a713b0cfd25"
	I0802 11:13:04.876391    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:13:04.876402    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:13:04.914375    4699 logs.go:123] Gathering logs for kube-apiserver [3900967269d0] ...
	I0802 11:13:04.914385    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3900967269d0"
	I0802 11:13:04.932816    4699 logs.go:123] Gathering logs for kube-proxy [d4dd057bd4e4] ...
	I0802 11:13:04.932826    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4dd057bd4e4"
	I0802 11:13:04.945759    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:13:04.945769    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:13:07.470844    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:12.473313    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:12.473425    4699 kubeadm.go:597] duration metric: took 4m3.983610292s to restartPrimaryControlPlane
	W0802 11:13:12.473469    4699 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 11:13:12.473487    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0802 11:13:13.500225    4699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026761583s)
	I0802 11:13:13.500298    4699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 11:13:13.505354    4699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 11:13:13.508128    4699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 11:13:13.510985    4699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 11:13:13.510991    4699 kubeadm.go:157] found existing configuration files:
	
	I0802 11:13:13.511014    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0802 11:13:13.514099    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 11:13:13.514120    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 11:13:13.517323    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0802 11:13:13.520003    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 11:13:13.520023    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 11:13:13.522785    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0802 11:13:13.526250    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 11:13:13.526273    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 11:13:13.529414    4699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0802 11:13:13.532079    4699 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 11:13:13.532096    4699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
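
Each of the four config checks above follows one pattern: grep the file for the expected control-plane endpoint and, if the grep fails (here simply because the files no longer exist after the reset), remove the file so kubeadm can regenerate it. The equivalent shell loop, as a sketch (minikube performs this in Go, one file at a time):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:50506" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
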
	I0802 11:13:13.534775    4699 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 11:13:13.552647    4699 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0802 11:13:13.552723    4699 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 11:13:13.599546    4699 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 11:13:13.599644    4699 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 11:13:13.599715    4699 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 11:13:13.648011    4699 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 11:13:13.653213    4699 out.go:204]   - Generating certificates and keys ...
	I0802 11:13:13.653250    4699 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 11:13:13.653342    4699 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 11:13:13.653379    4699 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 11:13:13.653418    4699 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 11:13:13.653475    4699 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 11:13:13.653502    4699 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 11:13:13.653535    4699 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 11:13:13.653567    4699 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 11:13:13.653608    4699 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 11:13:13.653652    4699 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 11:13:13.653678    4699 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 11:13:13.653715    4699 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 11:13:13.864034    4699 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 11:13:13.958642    4699 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 11:13:14.085525    4699 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 11:13:14.136663    4699 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 11:13:14.165404    4699 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 11:13:14.165451    4699 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 11:13:14.165473    4699 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 11:13:14.256324    4699 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 11:13:14.259979    4699 out.go:204]   - Booting up control plane ...
	I0802 11:13:14.260025    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 11:13:14.260062    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 11:13:14.260096    4699 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 11:13:14.260138    4699 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 11:13:14.260236    4699 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 11:13:18.260056    4699 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001263 seconds
	I0802 11:13:18.260137    4699 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 11:13:18.264643    4699 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 11:13:18.775413    4699 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 11:13:18.775531    4699 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 11:13:19.281408    4699 kubeadm.go:310] [bootstrap-token] Using token: 2w8ki8.s5djwx0dmusw95zk
	I0802 11:13:19.287821    4699 out.go:204]   - Configuring RBAC rules ...
	I0802 11:13:19.287876    4699 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 11:13:19.287921    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 11:13:19.294439    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 11:13:19.295552    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 11:13:19.296666    4699 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 11:13:19.297759    4699 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 11:13:19.301385    4699 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 11:13:19.477277    4699 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 11:13:19.685878    4699 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 11:13:19.686409    4699 kubeadm.go:310] 
	I0802 11:13:19.686441    4699 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 11:13:19.686444    4699 kubeadm.go:310] 
	I0802 11:13:19.686488    4699 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 11:13:19.686494    4699 kubeadm.go:310] 
	I0802 11:13:19.686506    4699 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 11:13:19.686536    4699 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 11:13:19.686567    4699 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 11:13:19.686569    4699 kubeadm.go:310] 
	I0802 11:13:19.686595    4699 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 11:13:19.686597    4699 kubeadm.go:310] 
	I0802 11:13:19.686651    4699 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 11:13:19.686670    4699 kubeadm.go:310] 
	I0802 11:13:19.686718    4699 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 11:13:19.686759    4699 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 11:13:19.686822    4699 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 11:13:19.686826    4699 kubeadm.go:310] 
	I0802 11:13:19.686867    4699 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 11:13:19.686905    4699 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 11:13:19.686907    4699 kubeadm.go:310] 
	I0802 11:13:19.686947    4699 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2w8ki8.s5djwx0dmusw95zk \
	I0802 11:13:19.686998    4699 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d \
	I0802 11:13:19.687008    4699 kubeadm.go:310] 	--control-plane 
	I0802 11:13:19.687011    4699 kubeadm.go:310] 
	I0802 11:13:19.687061    4699 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 11:13:19.687064    4699 kubeadm.go:310] 
	I0802 11:13:19.687109    4699 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2w8ki8.s5djwx0dmusw95zk \
	I0802 11:13:19.687187    4699 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9320a40b5936daeb22249c1a98fe573be47e358012961e7ff0a8e7d01ac6b4d 
	I0802 11:13:19.687302    4699 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 11:13:19.687351    4699 cni.go:84] Creating CNI manager for ""
	I0802 11:13:19.687360    4699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:13:19.691622    4699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 11:13:19.699669    4699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 11:13:19.702714    4699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
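
The 496-byte conflist written here is not reproduced in the log. For illustration only, a representative bridge CNI configuration of the shape the bridge and portmap plugins accept (the subnet and exact fields are assumptions, not necessarily the file minikube ships):

	# Hypothetical contents; minikube's actual 1-k8s.conflist may differ.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
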
	I0802 11:13:19.708737    4699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 11:13:19.708802    4699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 11:13:19.708803    4699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-387000 minikube.k8s.io/updated_at=2024_08_02T11_13_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=stopped-upgrade-387000 minikube.k8s.io/primary=true
	I0802 11:13:19.745656    4699 kubeadm.go:1113] duration metric: took 36.899834ms to wait for elevateKubeSystemPrivileges
	I0802 11:13:19.745669    4699 ops.go:34] apiserver oom_adj: -16
	I0802 11:13:19.745675    4699 kubeadm.go:394] duration metric: took 4m11.269257084s to StartCluster
	I0802 11:13:19.745685    4699 settings.go:142] acquiring lock: {Name:mke9d9a6b3c42219545f5aed5860e740f1b28aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:13:19.745780    4699 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:13:19.746180    4699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/kubeconfig: {Name:mkee875f598bd0a8f78c04f09a48257e74d5dd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:13:19.746378    4699 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:13:19.746409    4699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 11:13:19.746462    4699 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-387000"
	I0802 11:13:19.746471    4699 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:13:19.746477    4699 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-387000"
	W0802 11:13:19.746481    4699 addons.go:243] addon storage-provisioner should already be in state true
	I0802 11:13:19.746480    4699 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-387000"
	I0802 11:13:19.746494    4699 host.go:66] Checking if "stopped-upgrade-387000" exists ...
	I0802 11:13:19.746534    4699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-387000"
	I0802 11:13:19.747773    4699 kapi.go:59] client config for stopped-upgrade-387000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/stopped-upgrade-387000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-1243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e641b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 11:13:19.747894    4699 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-387000"
	W0802 11:13:19.747903    4699 addons.go:243] addon default-storageclass should already be in state true
	I0802 11:13:19.747909    4699 host.go:66] Checking if "stopped-upgrade-387000" exists ...
	I0802 11:13:19.750615    4699 out.go:177] * Verifying Kubernetes components...
	I0802 11:13:19.750991    4699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 11:13:19.754707    4699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 11:13:19.754714    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:13:19.758566    4699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 11:13:19.762637    4699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 11:13:19.766569    4699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:13:19.766575    4699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 11:13:19.766581    4699 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/stopped-upgrade-387000/id_rsa Username:docker}
	I0802 11:13:19.844550    4699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 11:13:19.849537    4699 api_server.go:52] waiting for apiserver process to appear ...
	I0802 11:13:19.849578    4699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 11:13:19.853425    4699 api_server.go:72] duration metric: took 107.036916ms to wait for apiserver process to appear ...
	I0802 11:13:19.853433    4699 api_server.go:88] waiting for apiserver healthz status ...
	I0802 11:13:19.853439    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:19.871198    4699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 11:13:19.890578    4699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 11:13:24.854766    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:24.854789    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:29.855165    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:29.855221    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:34.855231    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:34.855250    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:39.855380    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:39.855453    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:44.855883    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:44.855911    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:49.856256    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:49.856276    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0802 11:13:50.214781    4699 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0802 11:13:50.218594    4699 out.go:177] * Enabled addons: storage-provisioner
	I0802 11:13:50.226595    4699 addons.go:510] duration metric: took 30.481269083s for enable addons: enabled=[storage-provisioner]
	I0802 11:13:54.856799    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:54.856845    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:13:59.857351    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:13:59.857389    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:04.858107    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:04.858131    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:09.859215    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:09.859260    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:14.859841    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:14.859864    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:19.861440    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
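	(Editor's note: the repeating "Checking apiserver healthz ... context deadline exceeded" pairs above are minikube's health poll: each GET to /healthz carries a short per-request client timeout, and on failure the poller falls back to gathering container logs before retrying until the overall 6m0s node wait expires. A hedged Go sketch of such a loop follows; the function name and the ~5s per-request timeout are inferred from the timestamp gaps, and the TLS handling is simplified relative to the cert paths shown in the rest.Config above.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, perReq, overall time.Duration) error {
		client := &http.Client{
			Timeout: perReq, // source of "Client.Timeout exceeded while awaiting headers"
			Transport: &http.Transport{
				// Sketch only: minikube verifies against its own CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // brief pause between probes
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}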
	I0802 11:14:19.861563    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:19.893369    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:19.893447    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:19.904268    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:19.904328    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:19.915086    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:19.915156    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:19.929349    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:19.929419    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:19.939850    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:19.939914    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:19.950994    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:19.951063    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:19.961121    4699 logs.go:276] 0 containers: []
	W0802 11:14:19.961132    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:19.961188    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:19.971793    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:19.971808    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:19.971814    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:19.976327    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:19.976337    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:19.991130    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:19.991141    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:14:20.004656    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:20.004667    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:20.016016    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:20.016033    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:20.034477    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:20.034496    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:20.047063    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:20.047073    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:20.070308    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:20.070314    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:20.084678    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:20.084690    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:20.118566    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:20.118574    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:20.156555    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:20.156567    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:20.171362    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:20.171373    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:20.183586    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:20.183598    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
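	(Editor's note: every diagnostic pass above follows the same two-step pattern: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to discover container IDs per control-plane component, then docker logs --tail 400 <id> for each, plus journalctl for kubelet and Docker. The Go sketch below illustrates that pattern run locally; minikube actually executes these commands over its ssh_runner inside the VM, and the helper names here are illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists IDs of containers whose name matches k8s_<component>,
	// mirroring the docker ps invocations in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs fetches the last 400 log lines of one container, as the
	// "docker logs --tail 400 <id>" commands above do.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			for _, id := range ids {
				logs, _ := tailLogs(id)
				_ = logs // the test harness would append these to the report
			}
		}
	}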
	I0802 11:14:22.701059    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:27.703211    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:27.703305    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:27.714723    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:27.714801    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:27.725294    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:27.725369    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:27.735656    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:27.735733    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:27.746260    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:27.746320    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:27.756604    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:27.756670    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:27.767176    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:27.767243    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:27.777279    4699 logs.go:276] 0 containers: []
	W0802 11:14:27.777293    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:27.777356    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:27.791600    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:27.791615    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:27.791621    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:27.828025    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:27.828038    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:27.845560    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:27.845574    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:27.857237    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:27.857247    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:14:27.872354    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:27.872365    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:27.883689    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:27.883700    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:27.888188    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:27.888195    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:27.924289    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:27.924300    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:27.939068    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:27.939080    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:14:27.951203    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:27.951215    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:27.968743    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:27.968753    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:27.986120    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:27.986129    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:28.009823    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:28.009830    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:30.524230    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:35.526393    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:35.526569    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:35.544979    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:35.545065    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:35.557650    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:35.557729    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:35.571697    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:35.571770    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:35.582019    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:35.582088    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:35.592335    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:35.592427    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:35.602584    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:35.602659    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:35.612918    4699 logs.go:276] 0 containers: []
	W0802 11:14:35.612929    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:35.612990    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:35.623513    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:35.623527    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:35.623533    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:35.640722    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:35.640732    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:35.651862    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:35.651874    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:35.686153    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:35.686163    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:35.699788    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:35.699798    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:14:35.711147    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:35.711158    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:14:35.729561    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:35.729572    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:35.740955    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:35.740965    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:35.765999    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:35.766008    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:35.779501    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:35.779513    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:35.784181    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:35.784190    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:35.819282    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:35.819293    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:35.833852    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:35.833862    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:38.346039    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:43.348214    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:43.348661    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:43.387579    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:43.387717    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:43.410255    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:43.410355    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:43.428253    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:43.428332    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:43.440893    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:43.440961    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:43.451605    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:43.451679    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:43.462433    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:43.462493    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:43.472826    4699 logs.go:276] 0 containers: []
	W0802 11:14:43.472839    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:43.472898    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:43.488165    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:43.488178    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:43.488182    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:43.511385    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:43.511392    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:43.522983    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:43.522997    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:43.558236    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:43.558248    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:43.575724    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:43.575741    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:43.590070    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:43.590082    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:43.601621    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:43.601632    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:14:43.613288    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:43.613299    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:14:43.627925    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:43.627935    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:43.632314    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:43.632322    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:43.670198    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:43.670209    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:43.686650    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:43.686661    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:43.710335    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:43.710351    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:46.227015    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:51.228473    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:51.228691    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:51.255572    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:51.255700    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:51.272764    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:51.272858    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:51.286367    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:51.286452    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:51.297773    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:51.297840    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:51.308030    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:51.308089    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:51.321334    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:51.321398    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:51.331004    4699 logs.go:276] 0 containers: []
	W0802 11:14:51.331015    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:51.331066    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:51.342326    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:51.342344    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:51.342349    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:51.347258    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:51.347266    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:51.359211    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:51.359219    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:51.370755    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:51.370765    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:14:51.382310    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:51.382321    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:14:51.396946    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:51.396957    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:51.408623    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:51.408633    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:51.442445    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:51.442454    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:51.476711    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:51.476721    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:51.490738    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:51.490752    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:51.507910    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:51.507921    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:51.525648    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:51.525659    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:51.548970    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:51.548980    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:54.062099    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:14:59.062907    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:14:59.063035    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:14:59.080471    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:14:59.080540    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:14:59.097562    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:14:59.097631    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:14:59.108320    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:14:59.108384    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:14:59.118501    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:14:59.118572    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:14:59.128889    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:14:59.128954    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:14:59.139121    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:14:59.139190    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:14:59.149910    4699 logs.go:276] 0 containers: []
	W0802 11:14:59.149924    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:14:59.149979    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:14:59.160970    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:14:59.160988    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:14:59.160993    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:14:59.172849    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:14:59.172870    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:14:59.184197    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:14:59.184209    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:14:59.207757    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:14:59.207767    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:14:59.218946    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:14:59.218956    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:14:59.254710    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:14:59.254725    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:14:59.269448    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:14:59.269461    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:14:59.280860    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:14:59.280873    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:14:59.295204    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:14:59.295217    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:14:59.312709    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:14:59.312719    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:14:59.346543    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:14:59.346552    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:14:59.350580    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:14:59.350590    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:14:59.363976    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:14:59.363986    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:01.877768    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:06.880353    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:06.880591    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:06.908043    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:06.908154    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:06.926208    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:06.926281    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:06.940009    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:15:06.940082    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:06.951670    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:06.951740    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:06.961976    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:06.962041    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:06.972613    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:06.972680    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:06.983013    4699 logs.go:276] 0 containers: []
	W0802 11:15:06.983025    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:06.983086    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:06.993547    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:06.993561    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:06.993566    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:07.010974    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:07.010985    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:07.024807    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:07.024820    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:07.058160    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:07.058174    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:07.069695    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:07.069705    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:07.080771    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:07.080782    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:07.094843    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:07.094854    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:07.113778    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:07.113790    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:07.125262    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:07.125276    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:07.140789    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:07.140803    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:07.165081    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:07.165088    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:07.200383    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:07.200391    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:07.204426    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:07.204433    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:09.719052    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:14.721229    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:14.721480    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:14.762223    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:14.762344    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:14.779701    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:14.779774    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:14.792952    4699 logs.go:276] 2 containers: [fcd3d546ebf9 038abf581477]
	I0802 11:15:14.793026    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:14.804281    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:14.804352    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:14.814813    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:14.814876    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:14.825157    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:14.825231    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:14.834871    4699 logs.go:276] 0 containers: []
	W0802 11:15:14.834881    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:14.834933    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:14.850677    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:14.850693    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:14.850698    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:14.885829    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:14.885837    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:14.890376    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:14.890385    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:14.904411    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:14.904421    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:14.918759    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:14.918771    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:14.930326    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:14.930340    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:14.945461    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:14.945471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:14.961445    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:14.961457    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:14.999928    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:14.999942    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:15.012446    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:15.012457    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:15.031742    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:15.031756    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:15.043214    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:15.043226    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:15.067911    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:15.067918    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:17.581111    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:22.582270    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:22.582691    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:22.619122    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:22.619256    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:22.639476    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:22.639578    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:22.654246    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:15:22.654323    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:22.666921    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:22.666992    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:22.677905    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:22.677969    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:22.692036    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:22.692098    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:22.702523    4699 logs.go:276] 0 containers: []
	W0802 11:15:22.702534    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:22.702592    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:22.713473    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:22.713492    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:22.713499    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:22.747857    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:22.747865    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:22.763235    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:22.763245    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:22.775439    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:22.775449    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:22.810410    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:22.810424    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:22.831051    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:15:22.831064    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:15:22.847781    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:22.847794    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:22.865048    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:22.865058    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:22.869587    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:15:22.869596    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:15:22.880654    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:22.880672    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:22.892772    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:22.892782    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:22.904928    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:22.904942    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:22.919024    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:22.919036    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:22.930888    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:22.930897    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:22.956398    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:22.956405    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:25.469410    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:30.471609    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:30.472110    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:30.506876    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:30.507010    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:30.527354    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:30.527450    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:30.542225    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:15:30.542303    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:30.553789    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:30.553862    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:30.564388    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:30.564458    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:30.574807    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:30.574873    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:30.584935    4699 logs.go:276] 0 containers: []
	W0802 11:15:30.584946    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:30.585007    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:30.595610    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:30.595625    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:15:30.595630    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:15:30.607383    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:15:30.607395    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:15:30.619003    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:30.619013    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:30.632477    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:30.632488    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:30.647907    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:30.647919    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:30.665540    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:30.665553    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:30.692902    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:30.692915    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:30.698509    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:30.698520    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:30.735267    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:30.735277    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:30.750529    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:30.750540    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:30.764806    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:30.764818    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:30.776593    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:30.776607    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:30.809770    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:30.809778    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:30.821344    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:30.821356    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:30.832679    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:30.832694    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:33.346186    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:38.348465    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:38.348732    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:38.375427    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:38.375551    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:38.392173    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:38.392245    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:38.409486    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:15:38.409564    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:38.420469    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:38.420534    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:38.431225    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:38.431298    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:38.441677    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:38.441746    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:38.452056    4699 logs.go:276] 0 containers: []
	W0802 11:15:38.452067    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:38.452129    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:38.465394    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:38.465413    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:38.465417    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:38.501291    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:38.501299    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:38.505452    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:38.505461    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:38.519467    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:38.519477    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:38.531349    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:38.531364    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:38.550150    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:38.550161    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:38.586862    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:38.586876    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:38.600883    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:15:38.600896    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:15:38.612407    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:38.612416    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:38.623843    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:38.623857    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:38.636294    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:15:38.636307    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:15:38.647822    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:38.647832    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:38.661835    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:38.661845    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:38.673462    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:38.673473    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:38.697381    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:38.697391    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
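With the container IDs in hand, every round then collects the same fixed set of log sources. The command strings below are copied verbatim from the Run: lines; the sources table and the gather helper are only a sketch of the pattern. Note that the container-status command prefers crictl when `which crictl` finds one and falls back to plain docker ps otherwise:

    package main

    import "os/exec"

    // One entry per fixed "Gathering logs for ..." source above; the
    // per-container docker logs are handled separately below.
    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    // gather runs each fixed source command, then tails the last 400 lines
    // of every container collected during the inventory step.
    func gather(containerIDs []string) {
        for _, cmd := range sources {
            _ = exec.Command("/bin/bash", "-c", cmd).Run()
        }
        for _, id := range containerIDs {
            _ = exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).Run()
        }
    }
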
	I0802 11:15:41.211091    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:46.213539    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:46.213898    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:46.249433    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:46.249567    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:46.270592    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:46.270709    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:46.285734    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:15:46.285805    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:46.297903    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:46.297973    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:46.308784    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:46.308856    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:46.319666    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:46.319735    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:46.329960    4699 logs.go:276] 0 containers: []
	W0802 11:15:46.329970    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:46.330025    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:46.340552    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:46.340568    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:46.340573    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:46.355922    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:46.355933    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:46.370261    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:46.370270    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:46.391679    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:15:46.391691    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:15:46.403570    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:46.403581    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:46.415306    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:46.415318    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:46.427223    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:46.427232    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:46.444785    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:46.444793    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:46.456469    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:46.456482    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:46.491244    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:46.491257    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:46.503367    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:46.503378    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:46.515736    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:46.515747    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:46.549311    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:46.549319    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:46.553398    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:15:46.553403    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:15:46.565573    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:46.565584    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:49.092953    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:15:54.093959    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:15:54.094363    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:15:54.130247    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:15:54.130383    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:15:54.150884    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:15:54.150980    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:15:54.165349    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:15:54.165424    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:15:54.177556    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:15:54.177624    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:15:54.189292    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:15:54.189360    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:15:54.204332    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:15:54.204402    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:15:54.214349    4699 logs.go:276] 0 containers: []
	W0802 11:15:54.214361    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:15:54.214418    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:15:54.224822    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:15:54.224836    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:15:54.224843    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:15:54.239323    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:15:54.239335    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:15:54.250945    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:15:54.250957    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:15:54.262425    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:15:54.262439    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:15:54.273887    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:15:54.273900    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:15:54.278474    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:15:54.278482    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:15:54.296211    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:15:54.296222    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:15:54.308127    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:15:54.308139    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:15:54.333262    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:15:54.333269    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:15:54.345577    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:15:54.345590    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:15:54.357151    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:15:54.357161    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:15:54.373654    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:15:54.373665    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:15:54.410284    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:15:54.410291    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:15:54.443806    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:15:54.443819    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:15:54.457732    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:15:54.457745    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:15:56.971552    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:01.973587    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:01.973724    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:01.986381    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:01.986451    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:01.997049    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:01.997122    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:02.007537    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:02.007608    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:02.024785    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:02.024856    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:02.034997    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:02.035060    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:02.045649    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:02.045708    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:02.056134    4699 logs.go:276] 0 containers: []
	W0802 11:16:02.056145    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:02.056216    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:02.066519    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:02.066536    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:02.066541    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:02.080459    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:02.080472    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:02.092473    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:02.092483    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:02.105790    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:02.105804    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:02.141375    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:02.141384    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:02.155439    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:02.155450    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:02.167072    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:02.167082    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:02.178547    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:02.178560    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:02.190065    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:02.190076    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:02.212490    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:02.212501    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:02.239925    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:02.239935    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:02.264724    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:02.264748    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:02.299550    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:02.299556    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:02.303813    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:02.303818    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:02.315691    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:02.315702    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:04.827928    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:09.829031    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:09.829358    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:09.852517    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:09.852612    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:09.869749    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:09.869829    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:09.883495    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:09.883574    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:09.894212    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:09.894279    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:09.908542    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:09.908613    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:09.919293    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:09.919361    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:09.929742    4699 logs.go:276] 0 containers: []
	W0802 11:16:09.929755    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:09.929809    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:09.940031    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:09.940048    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:09.940054    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:09.978234    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:09.978243    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:09.990199    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:09.990210    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:10.001802    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:10.001814    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:10.027538    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:10.027549    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:10.039796    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:10.039806    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:10.051855    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:10.051866    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:10.063063    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:10.063073    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:10.078089    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:10.078100    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:10.095639    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:10.095651    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:10.106985    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:10.106995    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:10.142190    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:10.142198    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:10.146660    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:10.146667    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:10.160302    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:10.160314    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:10.173585    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:10.173597    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:12.690384    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:17.692497    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:17.692992    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:17.729822    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:17.729956    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:17.751244    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:17.751359    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:17.766854    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:17.766938    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:17.779910    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:17.779983    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:17.790446    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:17.790514    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:17.801036    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:17.801104    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:17.811678    4699 logs.go:276] 0 containers: []
	W0802 11:16:17.811688    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:17.811748    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:17.822219    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:17.822237    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:17.822243    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:17.840657    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:17.840670    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:17.852525    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:17.852534    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:17.863859    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:17.863868    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:17.887925    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:17.887940    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:17.892089    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:17.892096    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:17.907962    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:17.907972    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:17.923715    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:17.923726    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:17.957922    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:17.957930    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:17.992624    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:17.992635    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:18.006738    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:18.006750    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:18.019741    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:18.019752    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:18.031322    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:18.031333    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:18.043032    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:18.043042    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:18.060205    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:18.060218    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:20.576702    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:25.577647    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:25.577729    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:25.591187    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:25.591250    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:25.602729    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:25.602791    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:25.615143    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:25.615216    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:25.627908    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:25.627947    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:25.639194    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:25.639249    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:25.649857    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:25.649910    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:25.660262    4699 logs.go:276] 0 containers: []
	W0802 11:16:25.660273    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:25.660322    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:25.671340    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:25.671355    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:25.671360    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:25.675900    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:25.675909    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:25.697504    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:25.697515    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:25.712551    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:25.712562    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:25.724723    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:25.724733    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:25.736776    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:25.736786    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:25.755364    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:25.755373    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:25.789719    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:25.789729    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:25.808420    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:25.808431    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:25.823566    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:25.823576    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:25.836074    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:25.836086    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:25.859333    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:25.859349    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:25.885745    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:25.885765    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:25.926457    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:25.926471    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:25.942956    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:25.942970    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:28.457432    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:33.459543    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:33.459657    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:33.471109    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:33.471178    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:33.481347    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:33.481412    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:33.491629    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:33.491698    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:33.502065    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:33.502126    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:33.512340    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:33.512406    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:33.522636    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:33.522707    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:33.533048    4699 logs.go:276] 0 containers: []
	W0802 11:16:33.533059    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:33.533110    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:33.543563    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:33.543584    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:33.543589    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:33.548240    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:33.548245    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:33.571892    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:33.571901    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:33.584098    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:33.584113    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:33.618578    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:33.618589    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:33.633257    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:33.633270    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:33.655157    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:33.655170    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:33.666656    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:33.666669    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:33.684431    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:33.684442    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:33.696088    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:33.696099    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:33.731283    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:33.731290    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:33.743080    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:33.743092    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:33.754352    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:33.754363    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:33.766153    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:33.766167    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:33.777954    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:33.777966    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:36.292827    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:41.294936    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:41.295415    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:41.336418    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:41.336550    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:41.359335    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:41.359453    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:41.375330    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:41.375407    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:41.387873    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:41.387954    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:41.398886    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:41.398952    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:41.409287    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:41.409360    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:41.419706    4699 logs.go:276] 0 containers: []
	W0802 11:16:41.419717    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:41.419774    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:41.430155    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:41.430171    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:41.430176    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:41.474766    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:41.474779    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:41.486572    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:41.486583    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:41.498557    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:41.498569    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:41.512372    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:41.512383    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:41.526455    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:41.526466    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:41.544228    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:41.544237    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:41.557393    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:41.557406    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:41.564626    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:41.564636    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:41.579016    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:41.579028    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:41.594989    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:41.595000    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:41.619620    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:41.619628    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:41.654448    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:41.654455    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:41.666260    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:41.666272    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:41.678127    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:41.678136    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:44.195428    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:49.196405    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:49.196499    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:49.212564    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:49.212629    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:49.224607    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:49.224652    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:49.235763    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:49.235826    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:49.247250    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:49.247313    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:49.259720    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:49.259772    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:49.271053    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:49.271108    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:49.281507    4699 logs.go:276] 0 containers: []
	W0802 11:16:49.281521    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:49.281567    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:49.300110    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:49.300126    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:49.300132    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:49.319235    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:49.319246    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:49.332971    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:49.332982    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:49.347402    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:49.347414    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:49.359962    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:49.359971    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:49.384041    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:49.384054    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:49.400173    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:49.400185    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:49.418096    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:49.418108    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:49.431976    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:49.431987    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:49.469068    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:49.469083    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:49.474621    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:49.474634    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:49.516806    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:49.516816    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:49.533083    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:49.533091    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:49.549866    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:49.549877    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:49.565314    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:49.565326    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:16:52.083055    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:16:57.086385    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:16:57.086868    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:16:57.125226    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:16:57.125314    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:16:57.146002    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:16:57.146095    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:16:57.162520    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:16:57.162640    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:16:57.176365    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:16:57.176436    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:16:57.188906    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:16:57.188973    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:16:57.202047    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:16:57.202115    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:16:57.214744    4699 logs.go:276] 0 containers: []
	W0802 11:16:57.214758    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:16:57.214821    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:16:57.227871    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:16:57.227893    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:16:57.227898    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:16:57.248050    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:16:57.248065    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:16:57.261437    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:16:57.261453    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:16:57.282138    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:16:57.282153    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:16:57.295761    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:16:57.295774    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:16:57.313773    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:16:57.313782    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:16:57.329988    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:16:57.330002    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:16:57.355810    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:16:57.355819    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:16:57.376586    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:16:57.376602    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:16:57.395359    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:16:57.395372    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:16:57.432570    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:16:57.432579    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:16:57.437255    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:16:57.437261    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:16:57.451697    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:16:57.451708    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:16:57.486845    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:16:57.486856    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:16:57.499355    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:16:57.499367    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:17:00.014747    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:17:05.016441    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:17:05.016800    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:17:05.048118    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:17:05.048239    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:17:05.065548    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:17:05.065645    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:17:05.080185    4699 logs.go:276] 4 containers: [261e141dca26 dc8e3e7ebf20 fcd3d546ebf9 038abf581477]
	I0802 11:17:05.080266    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:17:05.099295    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:17:05.099355    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:17:05.109739    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:17:05.109798    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:17:05.121589    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:17:05.121653    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:17:05.131691    4699 logs.go:276] 0 containers: []
	W0802 11:17:05.131703    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:17:05.131762    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:17:05.146895    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:17:05.146909    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:17:05.146915    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:17:05.151454    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:17:05.151462    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:17:05.186494    4699 logs.go:123] Gathering logs for coredns [fcd3d546ebf9] ...
	I0802 11:17:05.186507    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcd3d546ebf9"
	I0802 11:17:05.198264    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:17:05.198275    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:17:05.213033    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:17:05.213043    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:17:05.230951    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:17:05.230961    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:17:05.252116    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:17:05.252131    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:17:05.287866    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:17:05.287872    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:17:05.305154    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:17:05.305168    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:17:05.316303    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:17:05.316312    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:17:05.330278    4699 logs.go:123] Gathering logs for coredns [038abf581477] ...
	I0802 11:17:05.330290    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 038abf581477"
	I0802 11:17:05.343023    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:17:05.343033    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:17:05.357324    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:17:05.357336    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:17:05.368366    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:17:05.368379    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:17:05.391698    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:17:05.391703    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:17:07.905281    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:17:12.907468    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:17:12.907534    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 11:17:12.919330    4699 logs.go:276] 1 containers: [1a6363e072d7]
	I0802 11:17:12.919397    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 11:17:12.930754    4699 logs.go:276] 1 containers: [dd18275c198f]
	I0802 11:17:12.930837    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 11:17:12.943631    4699 logs.go:276] 4 containers: [084999d0f003 a020dd825637 261e141dca26 dc8e3e7ebf20]
	I0802 11:17:12.943700    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 11:17:12.955415    4699 logs.go:276] 1 containers: [895c746d0d95]
	I0802 11:17:12.955486    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 11:17:12.966793    4699 logs.go:276] 1 containers: [c375d967186f]
	I0802 11:17:12.966863    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 11:17:12.978150    4699 logs.go:276] 1 containers: [ce34208a3700]
	I0802 11:17:12.978204    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 11:17:12.989510    4699 logs.go:276] 0 containers: []
	W0802 11:17:12.989522    4699 logs.go:278] No container was found matching "kindnet"
	I0802 11:17:12.989579    4699 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0802 11:17:13.002160    4699 logs.go:276] 1 containers: [3c7ea4440851]
	I0802 11:17:13.002178    4699 logs.go:123] Gathering logs for coredns [261e141dca26] ...
	I0802 11:17:13.002183    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261e141dca26"
	I0802 11:17:13.015795    4699 logs.go:123] Gathering logs for coredns [dc8e3e7ebf20] ...
	I0802 11:17:13.015805    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc8e3e7ebf20"
	I0802 11:17:13.027713    4699 logs.go:123] Gathering logs for kube-proxy [c375d967186f] ...
	I0802 11:17:13.027725    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c375d967186f"
	I0802 11:17:13.041881    4699 logs.go:123] Gathering logs for container status ...
	I0802 11:17:13.041894    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 11:17:13.055086    4699 logs.go:123] Gathering logs for describe nodes ...
	I0802 11:17:13.055095    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 11:17:13.093906    4699 logs.go:123] Gathering logs for kube-apiserver [1a6363e072d7] ...
	I0802 11:17:13.093921    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6363e072d7"
	I0802 11:17:13.108095    4699 logs.go:123] Gathering logs for coredns [a020dd825637] ...
	I0802 11:17:13.108104    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a020dd825637"
	I0802 11:17:13.120110    4699 logs.go:123] Gathering logs for coredns [084999d0f003] ...
	I0802 11:17:13.120124    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 084999d0f003"
	I0802 11:17:13.133420    4699 logs.go:123] Gathering logs for kube-controller-manager [ce34208a3700] ...
	I0802 11:17:13.133433    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce34208a3700"
	I0802 11:17:13.151950    4699 logs.go:123] Gathering logs for kubelet ...
	I0802 11:17:13.151959    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 11:17:13.188311    4699 logs.go:123] Gathering logs for dmesg ...
	I0802 11:17:13.188322    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 11:17:13.192939    4699 logs.go:123] Gathering logs for etcd [dd18275c198f] ...
	I0802 11:17:13.192946    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd18275c198f"
	I0802 11:17:13.207706    4699 logs.go:123] Gathering logs for kube-scheduler [895c746d0d95] ...
	I0802 11:17:13.207720    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 895c746d0d95"
	I0802 11:17:13.225148    4699 logs.go:123] Gathering logs for storage-provisioner [3c7ea4440851] ...
	I0802 11:17:13.225157    4699 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7ea4440851"
	I0802 11:17:13.237004    4699 logs.go:123] Gathering logs for Docker ...
	I0802 11:17:13.237013    4699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 11:17:15.763820    4699 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0802 11:17:20.766200    4699 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0802 11:17:20.772254    4699 out.go:177] 
	W0802 11:17:20.775295    4699 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0802 11:17:20.775331    4699 out.go:239] * 
	W0802 11:17:20.778289    4699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:20.788181    4699 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-387000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (572.96s)
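
The trace above shows what the 6m0s wait actually did: minikube probed https://10.0.2.15:8443/healthz, hit the 5s client timeout, re-gathered the container logs, and retried until the budget ran out. The probe can be repeated by hand while the profile is up; this is a sketch, and 10.0.2.15 is the guest-internal NAT address from the log, so it has to be queried from inside the VM:

	# probe the same healthz endpoint the wait loop polls
	out/minikube-darwin-arm64 -p stopped-upgrade-387000 ssh -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# repeat one of the log-gathering steps, using the apiserver container ID from the log
	out/minikube-darwin-arm64 -p stopped-upgrade-387000 ssh -- docker logs --tail 400 1a6363e072d7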

TestPause/serial/Start (9.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.873523083s)

-- stdout --
	* [pause-947000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-947000" primary control-plane node in "pause-947000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-947000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-947000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-947000 -n pause-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-947000 -n pause-947000: exit status 7 (65.844667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-947000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.94s)
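
This failure and every remaining start failure below share one root cause: nothing was accepting connections on /var/run/socket_vmnet, so socket_vmnet_client was refused before QEMU could boot. A quick host-side triage sketch (the service name assumes the Homebrew-managed socket_vmnet install that the qemu2 driver docs describe):

	ls -l /var/run/socket_vmnet              # does the unix socket exist?
	sudo lsof -U | grep socket_vmnet         # is a daemon listening on it?
	sudo brew services restart socket_vmnet  # restart the daemon (Homebrew/launchd setup)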

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 : exit status 80 (9.802882125s)

-- stdout --
	* [NoKubernetes-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-965000" primary control-plane node in "NoKubernetes-965000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-965000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000: exit status 7 (56.149125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-965000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 : exit status 80 (5.235748416s)

-- stdout --
	* [NoKubernetes-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-965000
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-965000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000: exit status 7 (53.007959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-965000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2315995s)

-- stdout --
	* [NoKubernetes-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-965000
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-965000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000: exit status 7 (41.244167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-965000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 : exit status 80 (5.266620583s)

-- stdout --
	* [NoKubernetes-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-965000
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-965000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-965000 -n NoKubernetes-965000: exit status 7 (58.485625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-965000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)
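
All four NoKubernetes subtests drive the same NoKubernetes-965000 profile, so once the first start failed to create the VM, the later runs only retried a restart against the same dead socket and failed in about five seconds each. The recovery hinted at in the output is the manual equivalent (a sketch, assuming socket_vmnet is fixed first; both commands appear verbatim in the logs above):

	out/minikube-darwin-arm64 delete -p NoKubernetes-965000
	out/minikube-darwin-arm64 start -p NoKubernetes-965000 --driver=qemu2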

TestNetworkPlugins/group/auto/Start (10.03s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.026520167s)

-- stdout --
	* [auto-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-800000" primary control-plane node in "auto-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:15:49.422488    4943 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:15:49.422632    4943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:15:49.422635    4943 out.go:304] Setting ErrFile to fd 2...
	I0802 11:15:49.422638    4943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:15:49.422780    4943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:15:49.423944    4943 out.go:298] Setting JSON to false
	I0802 11:15:49.440879    4943 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4513,"bootTime":1722618036,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:15:49.440957    4943 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:15:49.447193    4943 out.go:177] * [auto-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:15:49.453092    4943 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:15:49.453165    4943 notify.go:220] Checking for updates...
	I0802 11:15:49.460961    4943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:15:49.464030    4943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:15:49.467076    4943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:15:49.470010    4943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:15:49.473036    4943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:15:49.476375    4943 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:15:49.476443    4943 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:15:49.476492    4943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:15:49.480934    4943 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:15:49.488066    4943 start.go:297] selected driver: qemu2
	I0802 11:15:49.488074    4943 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:15:49.488081    4943 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:15:49.490397    4943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:15:49.493997    4943 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:15:49.497180    4943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:15:49.497206    4943 cni.go:84] Creating CNI manager for ""
	I0802 11:15:49.497219    4943 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:15:49.497224    4943 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:15:49.497250    4943 start.go:340] cluster config:
	{Name:auto-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:15:49.501045    4943 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:15:49.508014    4943 out.go:177] * Starting "auto-800000" primary control-plane node in "auto-800000" cluster
	I0802 11:15:49.512038    4943 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:15:49.512057    4943 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:15:49.512072    4943 cache.go:56] Caching tarball of preloaded images
	I0802 11:15:49.512149    4943 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:15:49.512155    4943 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:15:49.512219    4943 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/auto-800000/config.json ...
	I0802 11:15:49.512231    4943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/auto-800000/config.json: {Name:mka30780fe12efc7ae8501b8766e99f45a3d82ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:15:49.512542    4943 start.go:360] acquireMachinesLock for auto-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:15:49.512576    4943 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "auto-800000"
	I0802 11:15:49.512585    4943 start.go:93] Provisioning new machine with config: &{Name:auto-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:15:49.512621    4943 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:15:49.520988    4943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:15:49.538660    4943 start.go:159] libmachine.API.Create for "auto-800000" (driver="qemu2")
	I0802 11:15:49.538693    4943 client.go:168] LocalClient.Create starting
	I0802 11:15:49.538759    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:15:49.538790    4943 main.go:141] libmachine: Decoding PEM data...
	I0802 11:15:49.538800    4943 main.go:141] libmachine: Parsing certificate...
	I0802 11:15:49.538855    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:15:49.538888    4943 main.go:141] libmachine: Decoding PEM data...
	I0802 11:15:49.538898    4943 main.go:141] libmachine: Parsing certificate...
	I0802 11:15:49.539266    4943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:15:49.691499    4943 main.go:141] libmachine: Creating SSH key...
	I0802 11:15:49.843005    4943 main.go:141] libmachine: Creating Disk image...
	I0802 11:15:49.843014    4943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:15:49.843216    4943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:49.853212    4943 main.go:141] libmachine: STDOUT: 
	I0802 11:15:49.853240    4943 main.go:141] libmachine: STDERR: 
	I0802 11:15:49.853303    4943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2 +20000M
	I0802 11:15:49.861851    4943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:15:49.861872    4943 main.go:141] libmachine: STDERR: 
	I0802 11:15:49.861890    4943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:49.861893    4943 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:15:49.861905    4943 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:15:49.861936    4943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:7a:d1:e6:7c:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:49.863676    4943 main.go:141] libmachine: STDOUT: 
	I0802 11:15:49.863695    4943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:15:49.863713    4943 client.go:171] duration metric: took 325.024375ms to LocalClient.Create
	I0802 11:15:51.865887    4943 start.go:128] duration metric: took 2.353315625s to createHost
	I0802 11:15:51.865975    4943 start.go:83] releasing machines lock for "auto-800000", held for 2.353473541s
	W0802 11:15:51.866048    4943 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:15:51.878389    4943 out.go:177] * Deleting "auto-800000" in qemu2 ...
	W0802 11:15:51.909887    4943 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:15:51.909914    4943 start.go:729] Will try again in 5 seconds ...
	I0802 11:15:56.911875    4943 start.go:360] acquireMachinesLock for auto-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:15:56.912089    4943 start.go:364] duration metric: took 158.75µs to acquireMachinesLock for "auto-800000"
	I0802 11:15:56.912136    4943 start.go:93] Provisioning new machine with config: &{Name:auto-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:15:56.912231    4943 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:15:56.916510    4943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:15:56.942393    4943 start.go:159] libmachine.API.Create for "auto-800000" (driver="qemu2")
	I0802 11:15:56.942443    4943 client.go:168] LocalClient.Create starting
	I0802 11:15:56.942512    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:15:56.942562    4943 main.go:141] libmachine: Decoding PEM data...
	I0802 11:15:56.942574    4943 main.go:141] libmachine: Parsing certificate...
	I0802 11:15:56.942616    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:15:56.942646    4943 main.go:141] libmachine: Decoding PEM data...
	I0802 11:15:56.942657    4943 main.go:141] libmachine: Parsing certificate...
	I0802 11:15:56.942995    4943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:15:57.096690    4943 main.go:141] libmachine: Creating SSH key...
	I0802 11:15:57.357291    4943 main.go:141] libmachine: Creating Disk image...
	I0802 11:15:57.357304    4943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:15:57.357524    4943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:57.367187    4943 main.go:141] libmachine: STDOUT: 
	I0802 11:15:57.367208    4943 main.go:141] libmachine: STDERR: 
	I0802 11:15:57.367283    4943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2 +20000M
	I0802 11:15:57.375191    4943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:15:57.375205    4943 main.go:141] libmachine: STDERR: 
	I0802 11:15:57.375218    4943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:57.375221    4943 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:15:57.375243    4943 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:15:57.375272    4943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:fe:0a:fb:a3:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/auto-800000/disk.qcow2
	I0802 11:15:57.376896    4943 main.go:141] libmachine: STDOUT: 
	I0802 11:15:57.376912    4943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:15:57.376925    4943 client.go:171] duration metric: took 434.494ms to LocalClient.Create
	I0802 11:15:59.379054    4943 start.go:128] duration metric: took 2.466883833s to createHost
	I0802 11:15:59.379150    4943 start.go:83] releasing machines lock for "auto-800000", held for 2.4671355s
	W0802 11:15:59.379586    4943 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:15:59.389088    4943 out.go:177] 
	W0802 11:15:59.396305    4943 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:15:59.396329    4943 out.go:239] * 
	W0802 11:15:59.399133    4943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:15:59.408209    4943 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.03s)
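
The --alsologtostderr trace pins down the failure point: the qcow2 disk creation and resize succeed, and it is the VM launch that dies, because QEMU is started through socket_vmnet_client and the client cannot reach the socket. The wrapper can be exercised in isolation with the paths taken from the log; "true" here is just a stand-in for the real qemu-system-aarch64 command line:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

On a healthy host this connects and runs the wrapped command; here it should fail immediately with the same "Connection refused".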

TestNetworkPlugins/group/kindnet/Start (9.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.708566125s)

-- stdout --
	* [kindnet-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-800000" primary control-plane node in "kindnet-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:16:01.552040    5055 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:16:01.552170    5055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:01.552173    5055 out.go:304] Setting ErrFile to fd 2...
	I0802 11:16:01.552176    5055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:01.552308    5055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:16:01.553363    5055 out.go:298] Setting JSON to false
	I0802 11:16:01.570206    5055 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4525,"bootTime":1722618036,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:16:01.570284    5055 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:16:01.577502    5055 out.go:177] * [kindnet-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:16:01.585440    5055 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:16:01.585490    5055 notify.go:220] Checking for updates...
	I0802 11:16:01.590676    5055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:16:01.593349    5055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:16:01.596393    5055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:16:01.599408    5055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:16:01.605484    5055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:16:01.608722    5055 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:16:01.608796    5055 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:16:01.608846    5055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:16:01.612377    5055 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:16:01.619393    5055 start.go:297] selected driver: qemu2
	I0802 11:16:01.619400    5055 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:16:01.619405    5055 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:16:01.621824    5055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:16:01.624347    5055 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:16:01.627453    5055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:16:01.627475    5055 cni.go:84] Creating CNI manager for "kindnet"
	I0802 11:16:01.627478    5055 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0802 11:16:01.627509    5055 start.go:340] cluster config:
	{Name:kindnet-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:16:01.631265    5055 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:16:01.638231    5055 out.go:177] * Starting "kindnet-800000" primary control-plane node in "kindnet-800000" cluster
	I0802 11:16:01.642329    5055 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:16:01.642345    5055 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:16:01.642355    5055 cache.go:56] Caching tarball of preloaded images
	I0802 11:16:01.642406    5055 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:16:01.642411    5055 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:16:01.642469    5055 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kindnet-800000/config.json ...
	I0802 11:16:01.642480    5055 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kindnet-800000/config.json: {Name:mke58db1348bbab0756edc696f90ad96b504c590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:16:01.642785    5055 start.go:360] acquireMachinesLock for kindnet-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:01.642824    5055 start.go:364] duration metric: took 32.542µs to acquireMachinesLock for "kindnet-800000"
	I0802 11:16:01.642834    5055 start.go:93] Provisioning new machine with config: &{Name:kindnet-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:01.642859    5055 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:01.646349    5055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:01.662637    5055 start.go:159] libmachine.API.Create for "kindnet-800000" (driver="qemu2")
	I0802 11:16:01.662670    5055 client.go:168] LocalClient.Create starting
	I0802 11:16:01.662739    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:01.662771    5055 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:01.662781    5055 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:01.662817    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:01.662841    5055 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:01.662854    5055 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:01.663285    5055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:01.815410    5055 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:01.864686    5055 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:01.864691    5055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:01.864879    5055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:01.874187    5055 main.go:141] libmachine: STDOUT: 
	I0802 11:16:01.874206    5055 main.go:141] libmachine: STDERR: 
	I0802 11:16:01.874270    5055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2 +20000M
	I0802 11:16:01.882111    5055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:01.882125    5055 main.go:141] libmachine: STDERR: 
	I0802 11:16:01.882152    5055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:01.882156    5055 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:01.882171    5055 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:01.882193    5055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0b:6b:b7:48:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:01.883841    5055 main.go:141] libmachine: STDOUT: 
	I0802 11:16:01.883853    5055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:01.883870    5055 client.go:171] duration metric: took 221.202917ms to LocalClient.Create
	I0802 11:16:03.886046    5055 start.go:128] duration metric: took 2.243239875s to createHost
	I0802 11:16:03.886119    5055 start.go:83] releasing machines lock for "kindnet-800000", held for 2.243364667s
	W0802 11:16:03.886231    5055 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:03.892490    5055 out.go:177] * Deleting "kindnet-800000" in qemu2 ...
	W0802 11:16:03.924325    5055 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:03.924359    5055 start.go:729] Will try again in 5 seconds ...
	I0802 11:16:08.926292    5055 start.go:360] acquireMachinesLock for kindnet-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:08.926532    5055 start.go:364] duration metric: took 202.917µs to acquireMachinesLock for "kindnet-800000"
	I0802 11:16:08.926584    5055 start.go:93] Provisioning new machine with config: &{Name:kindnet-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:08.926691    5055 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:08.932957    5055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:08.955306    5055 start.go:159] libmachine.API.Create for "kindnet-800000" (driver="qemu2")
	I0802 11:16:08.955341    5055 client.go:168] LocalClient.Create starting
	I0802 11:16:08.955417    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:08.955462    5055 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:08.955474    5055 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:08.955523    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:08.955551    5055 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:08.955557    5055 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:08.955877    5055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:09.108043    5055 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:09.172407    5055 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:09.172418    5055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:09.172605    5055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:09.182039    5055 main.go:141] libmachine: STDOUT: 
	I0802 11:16:09.182055    5055 main.go:141] libmachine: STDERR: 
	I0802 11:16:09.182104    5055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2 +20000M
	I0802 11:16:09.190233    5055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:09.190252    5055 main.go:141] libmachine: STDERR: 
	I0802 11:16:09.190265    5055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:09.190271    5055 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:09.190283    5055 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:09.190315    5055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a4:93:18:3d:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kindnet-800000/disk.qcow2
	I0802 11:16:09.192129    5055 main.go:141] libmachine: STDOUT: 
	I0802 11:16:09.192148    5055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:09.192163    5055 client.go:171] duration metric: took 236.824667ms to LocalClient.Create
	I0802 11:16:11.194293    5055 start.go:128] duration metric: took 2.267649125s to createHost
	I0802 11:16:11.194372    5055 start.go:83] releasing machines lock for "kindnet-800000", held for 2.267907542s
	W0802 11:16:11.194699    5055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:11.204251    5055 out.go:177] 
	W0802 11:16:11.208286    5055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:16:11.208307    5055 out.go:239] * 
	* 
	W0802 11:16:11.209998    5055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:16:11.220055    5055 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.71s)
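
The stderr above shows minikube's provisioning retry in full: StartHost fails (start.go:714), the half-created machine is deleted, minikube waits a fixed five seconds (start.go:729), then makes exactly one more attempt before exiting with GUEST_PROVISION. A hedged sketch of that control flow, where createHost is a hypothetical stand-in for the internal provisioning call rather than minikube's actual API:

	// retrysketch.go — hedged sketch of the one-retry pattern visible in
	// the stderr above; createHost is a placeholder, not minikube code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in for the real provisioning step, which fails while
		// /var/run/socket_vmnet refuses connections.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed back-off seen in start.go:729
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the daemon never comes back within those five seconds, the second attempt fails identically and each test burns roughly ten seconds, matching the ~9.7–10s durations reported for this group.
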
TestNetworkPlugins/group/calico/Start (9.89s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.887073625s)
-- stdout --
	* [calico-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-800000" primary control-plane node in "calico-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0802 11:16:13.440739    5168 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:16:13.440859    5168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:13.440862    5168 out.go:304] Setting ErrFile to fd 2...
	I0802 11:16:13.440865    5168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:13.440977    5168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:16:13.442085    5168 out.go:298] Setting JSON to false
	I0802 11:16:13.458635    5168 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4537,"bootTime":1722618036,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:16:13.458701    5168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:16:13.465128    5168 out.go:177] * [calico-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:16:13.472116    5168 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:16:13.472225    5168 notify.go:220] Checking for updates...
	I0802 11:16:13.479084    5168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:16:13.482017    5168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:16:13.485100    5168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:16:13.488089    5168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:16:13.489393    5168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:16:13.492369    5168 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:16:13.492445    5168 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:16:13.492505    5168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:16:13.497033    5168 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:16:13.502102    5168 start.go:297] selected driver: qemu2
	I0802 11:16:13.502110    5168 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:16:13.502121    5168 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:16:13.504474    5168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:16:13.507145    5168 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:16:13.510215    5168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:16:13.510236    5168 cni.go:84] Creating CNI manager for "calico"
	I0802 11:16:13.510240    5168 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0802 11:16:13.510277    5168 start.go:340] cluster config:
	{Name:calico-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:16:13.513871    5168 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:16:13.521112    5168 out.go:177] * Starting "calico-800000" primary control-plane node in "calico-800000" cluster
	I0802 11:16:13.525013    5168 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:16:13.525035    5168 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:16:13.525044    5168 cache.go:56] Caching tarball of preloaded images
	I0802 11:16:13.525103    5168 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:16:13.525109    5168 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:16:13.525157    5168 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/calico-800000/config.json ...
	I0802 11:16:13.525168    5168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/calico-800000/config.json: {Name:mk9c152029aa4cec25736a3254ae5a76ed69965e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:16:13.525508    5168 start.go:360] acquireMachinesLock for calico-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:13.525541    5168 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "calico-800000"
	I0802 11:16:13.525552    5168 start.go:93] Provisioning new machine with config: &{Name:calico-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:13.525585    5168 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:13.529129    5168 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:13.545810    5168 start.go:159] libmachine.API.Create for "calico-800000" (driver="qemu2")
	I0802 11:16:13.545833    5168 client.go:168] LocalClient.Create starting
	I0802 11:16:13.545894    5168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:13.545926    5168 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:13.545933    5168 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:13.545974    5168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:13.546002    5168 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:13.546010    5168 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:13.546496    5168 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:13.698523    5168 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:13.837289    5168 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:13.837300    5168 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:13.837524    5168 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:13.846811    5168 main.go:141] libmachine: STDOUT: 
	I0802 11:16:13.846829    5168 main.go:141] libmachine: STDERR: 
	I0802 11:16:13.846876    5168 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2 +20000M
	I0802 11:16:13.854987    5168 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:13.855002    5168 main.go:141] libmachine: STDERR: 
	I0802 11:16:13.855017    5168 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:13.855021    5168 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:13.855035    5168 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:13.855071    5168 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:e2:4e:da:dd:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:13.856687    5168 main.go:141] libmachine: STDOUT: 
	I0802 11:16:13.856704    5168 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:13.856721    5168 client.go:171] duration metric: took 310.89525ms to LocalClient.Create
	I0802 11:16:15.858757    5168 start.go:128] duration metric: took 2.333238625s to createHost
	I0802 11:16:15.858793    5168 start.go:83] releasing machines lock for "calico-800000", held for 2.333325625s
	W0802 11:16:15.858846    5168 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:15.864691    5168 out.go:177] * Deleting "calico-800000" in qemu2 ...
	W0802 11:16:15.884213    5168 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:15.884221    5168 start.go:729] Will try again in 5 seconds ...
	I0802 11:16:20.886233    5168 start.go:360] acquireMachinesLock for calico-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:20.886720    5168 start.go:364] duration metric: took 381.875µs to acquireMachinesLock for "calico-800000"
	I0802 11:16:20.886785    5168 start.go:93] Provisioning new machine with config: &{Name:calico-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:20.887080    5168 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:20.897774    5168 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:20.948281    5168 start.go:159] libmachine.API.Create for "calico-800000" (driver="qemu2")
	I0802 11:16:20.948355    5168 client.go:168] LocalClient.Create starting
	I0802 11:16:20.948476    5168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:20.948539    5168 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:20.948567    5168 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:20.948630    5168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:20.948675    5168 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:20.948694    5168 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:20.949429    5168 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:21.112080    5168 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:21.234694    5168 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:21.234700    5168 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:21.234888    5168 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:21.244620    5168 main.go:141] libmachine: STDOUT: 
	I0802 11:16:21.244644    5168 main.go:141] libmachine: STDERR: 
	I0802 11:16:21.244709    5168 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2 +20000M
	I0802 11:16:21.252969    5168 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:21.252986    5168 main.go:141] libmachine: STDERR: 
	I0802 11:16:21.252999    5168 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:21.253003    5168 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:21.253013    5168 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:21.253045    5168 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:82:2d:2b:8f:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/calico-800000/disk.qcow2
	I0802 11:16:21.254745    5168 main.go:141] libmachine: STDOUT: 
	I0802 11:16:21.254760    5168 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:21.254772    5168 client.go:171] duration metric: took 306.423167ms to LocalClient.Create
	I0802 11:16:23.256829    5168 start.go:128] duration metric: took 2.369784542s to createHost
	I0802 11:16:23.256880    5168 start.go:83] releasing machines lock for "calico-800000", held for 2.370214792s
	W0802 11:16:23.257149    5168 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:23.269600    5168 out.go:177] 
	W0802 11:16:23.273791    5168 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:16:23.273827    5168 out.go:239] * 
	* 
	W0802 11:16:23.275104    5168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:16:23.287646    5168 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
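
Note that disk preparation succeeds on every attempt; only the network hookup fails. The two qemu-img invocations libmachine logs before each start (convert raw to qcow2, then grow by +20000M) can be reproduced standalone. A hedged sketch using placeholder paths, assuming qemu-img is on PATH and the raw image exists:

	// disksketch.go — hedged sketch of the two qemu-img steps logged by
	// libmachine above; paths are placeholders, not the CI paths.
	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts with its combined output on failure.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
		log.Printf("%s: %s", name, out)
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		run("qemu-img", "resize", qcow2, "+20000M")
	}

That the equivalent of these commands keeps succeeding in the logs narrows the failure to the socket_vmnet_client step that immediately follows them.
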
TestNetworkPlugins/group/custom-flannel/Start (9.75s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.748353209s)
-- stdout --
	* [custom-flannel-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-800000" primary control-plane node in "custom-flannel-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0802 11:16:25.706403    5287 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:16:25.706545    5287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:25.706553    5287 out.go:304] Setting ErrFile to fd 2...
	I0802 11:16:25.706555    5287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:25.706701    5287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:16:25.707903    5287 out.go:298] Setting JSON to false
	I0802 11:16:25.726615    5287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4549,"bootTime":1722618036,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:16:25.726698    5287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:16:25.732531    5287 out.go:177] * [custom-flannel-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:16:25.738593    5287 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:16:25.738633    5287 notify.go:220] Checking for updates...
	I0802 11:16:25.746464    5287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:16:25.756579    5287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:16:25.764510    5287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:16:25.768605    5287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:16:25.773551    5287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:16:25.777834    5287 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:16:25.777901    5287 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:16:25.777954    5287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:16:25.782546    5287 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:16:25.789513    5287 start.go:297] selected driver: qemu2
	I0802 11:16:25.789519    5287 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:16:25.789524    5287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:16:25.791884    5287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:16:25.794455    5287 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:16:25.798617    5287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:16:25.798633    5287 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0802 11:16:25.798651    5287 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0802 11:16:25.798689    5287 start.go:340] cluster config:
	{Name:custom-flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:16:25.802762    5287 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:16:25.809484    5287 out.go:177] * Starting "custom-flannel-800000" primary control-plane node in "custom-flannel-800000" cluster
	I0802 11:16:25.813534    5287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:16:25.813578    5287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:16:25.813591    5287 cache.go:56] Caching tarball of preloaded images
	I0802 11:16:25.813677    5287 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:16:25.813684    5287 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:16:25.813750    5287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/custom-flannel-800000/config.json ...
	I0802 11:16:25.813761    5287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/custom-flannel-800000/config.json: {Name:mke268eebd5d44172f6b6b4bfeef22366fa52e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:16:25.814045    5287 start.go:360] acquireMachinesLock for custom-flannel-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:25.814082    5287 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "custom-flannel-800000"
	I0802 11:16:25.814092    5287 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:25.814126    5287 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:25.818491    5287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:25.834755    5287 start.go:159] libmachine.API.Create for "custom-flannel-800000" (driver="qemu2")
	I0802 11:16:25.834791    5287 client.go:168] LocalClient.Create starting
	I0802 11:16:25.834882    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:25.834917    5287 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:25.834927    5287 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:25.834977    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:25.835004    5287 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:25.835017    5287 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:25.835382    5287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:25.985083    5287 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:26.052901    5287 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:26.052909    5287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:26.053107    5287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:26.062351    5287 main.go:141] libmachine: STDOUT: 
	I0802 11:16:26.062377    5287 main.go:141] libmachine: STDERR: 
	I0802 11:16:26.062434    5287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2 +20000M
	I0802 11:16:26.070315    5287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:26.070329    5287 main.go:141] libmachine: STDERR: 
	I0802 11:16:26.070347    5287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:26.070351    5287 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:26.070363    5287 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:26.070394    5287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:62:50:e1:03:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:26.072020    5287 main.go:141] libmachine: STDOUT: 
	I0802 11:16:26.072035    5287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:26.072061    5287 client.go:171] duration metric: took 237.271792ms to LocalClient.Create
	I0802 11:16:28.074259    5287 start.go:128] duration metric: took 2.260186959s to createHost
	I0802 11:16:28.074328    5287 start.go:83] releasing machines lock for "custom-flannel-800000", held for 2.260316959s
	W0802 11:16:28.074404    5287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:28.089684    5287 out.go:177] * Deleting "custom-flannel-800000" in qemu2 ...
	W0802 11:16:28.116762    5287 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:28.116788    5287 start.go:729] Will try again in 5 seconds ...
	I0802 11:16:33.118753    5287 start.go:360] acquireMachinesLock for custom-flannel-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:33.118994    5287 start.go:364] duration metric: took 192.125µs to acquireMachinesLock for "custom-flannel-800000"
	I0802 11:16:33.119056    5287 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:33.119164    5287 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:33.126480    5287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:33.151199    5287 start.go:159] libmachine.API.Create for "custom-flannel-800000" (driver="qemu2")
	I0802 11:16:33.151239    5287 client.go:168] LocalClient.Create starting
	I0802 11:16:33.151334    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:33.151376    5287 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:33.151387    5287 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:33.151430    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:33.151457    5287 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:33.151463    5287 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:33.151799    5287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:33.303999    5287 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:33.366971    5287 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:33.366981    5287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:33.367159    5287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:33.376472    5287 main.go:141] libmachine: STDOUT: 
	I0802 11:16:33.376489    5287 main.go:141] libmachine: STDERR: 
	I0802 11:16:33.376552    5287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2 +20000M
	I0802 11:16:33.384607    5287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:33.384621    5287 main.go:141] libmachine: STDERR: 
	I0802 11:16:33.384632    5287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:33.384639    5287 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:33.384648    5287 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:33.384677    5287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:50:23:4c:bd:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/custom-flannel-800000/disk.qcow2
	I0802 11:16:33.386334    5287 main.go:141] libmachine: STDOUT: 
	I0802 11:16:33.386350    5287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:33.386364    5287 client.go:171] duration metric: took 235.128042ms to LocalClient.Create
	I0802 11:16:35.388495    5287 start.go:128] duration metric: took 2.269367333s to createHost
	I0802 11:16:35.388563    5287 start.go:83] releasing machines lock for "custom-flannel-800000", held for 2.2696365s
	W0802 11:16:35.389031    5287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:35.397587    5287 out.go:177] 
	W0802 11:16:35.401632    5287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:16:35.401668    5287 out.go:239] * 
	* 
	W0802 11:16:35.404144    5287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:16:35.416504    5287 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.75s)
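Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with status 80 after one retry. A minimal sanity check on the build host, as a sketch: the socket path and /opt/socket_vmnet prefix come from the log itself, but the Homebrew service name is an assumption and depends on how socket_vmnet was installed.

	# Is the daemon alive, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If socket_vmnet was installed via Homebrew, restarting the service
	# usually recreates the socket (service name assumed; adjust locally):
	sudo brew services restart socket_vmnet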

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.761201834s)

-- stdout --
	* [false-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-800000" primary control-plane node in "false-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:16:37.786782    5405 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:16:37.786952    5405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:37.786955    5405 out.go:304] Setting ErrFile to fd 2...
	I0802 11:16:37.786957    5405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:37.787082    5405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:16:37.788203    5405 out.go:298] Setting JSON to false
	I0802 11:16:37.804315    5405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4561,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:16:37.804394    5405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:16:37.811263    5405 out.go:177] * [false-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:16:37.819263    5405 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:16:37.819321    5405 notify.go:220] Checking for updates...
	I0802 11:16:37.827214    5405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:16:37.828685    5405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:16:37.833269    5405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:16:37.836321    5405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:16:37.837606    5405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:16:37.840514    5405 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:16:37.840598    5405 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:16:37.840645    5405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:16:37.845240    5405 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:16:37.852188    5405 start.go:297] selected driver: qemu2
	I0802 11:16:37.852193    5405 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:16:37.852199    5405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:16:37.854308    5405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:16:37.858279    5405 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:16:37.861188    5405 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:16:37.861205    5405 cni.go:84] Creating CNI manager for "false"
	I0802 11:16:37.861227    5405 start.go:340] cluster config:
	{Name:false-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:16:37.864820    5405 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:16:37.873297    5405 out.go:177] * Starting "false-800000" primary control-plane node in "false-800000" cluster
	I0802 11:16:37.877168    5405 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:16:37.877182    5405 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:16:37.877194    5405 cache.go:56] Caching tarball of preloaded images
	I0802 11:16:37.877247    5405 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:16:37.877252    5405 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:16:37.877307    5405 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/false-800000/config.json ...
	I0802 11:16:37.877318    5405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/false-800000/config.json: {Name:mka5a07ea0dc4a23981e5e94fc19c18e518f9785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:16:37.877560    5405 start.go:360] acquireMachinesLock for false-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:37.877602    5405 start.go:364] duration metric: took 35.292µs to acquireMachinesLock for "false-800000"
	I0802 11:16:37.877613    5405 start.go:93] Provisioning new machine with config: &{Name:false-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:37.877639    5405 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:37.886206    5405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:37.902463    5405 start.go:159] libmachine.API.Create for "false-800000" (driver="qemu2")
	I0802 11:16:37.902483    5405 client.go:168] LocalClient.Create starting
	I0802 11:16:37.902539    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:37.902576    5405 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:37.902586    5405 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:37.902630    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:37.902653    5405 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:37.902663    5405 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:37.903084    5405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:38.053342    5405 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:38.128311    5405 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:38.128325    5405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:38.128537    5405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:38.137856    5405 main.go:141] libmachine: STDOUT: 
	I0802 11:16:38.137877    5405 main.go:141] libmachine: STDERR: 
	I0802 11:16:38.137931    5405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2 +20000M
	I0802 11:16:38.145885    5405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:38.145899    5405 main.go:141] libmachine: STDERR: 
	I0802 11:16:38.145908    5405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:38.145912    5405 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:38.145926    5405 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:38.145957    5405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e4:1c:ae:62:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:38.147693    5405 main.go:141] libmachine: STDOUT: 
	I0802 11:16:38.147716    5405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:38.147739    5405 client.go:171] duration metric: took 245.259833ms to LocalClient.Create
	I0802 11:16:40.149980    5405 start.go:128] duration metric: took 2.272372208s to createHost
	I0802 11:16:40.150103    5405 start.go:83] releasing machines lock for "false-800000", held for 2.272571958s
	W0802 11:16:40.150184    5405 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:40.157507    5405 out.go:177] * Deleting "false-800000" in qemu2 ...
	W0802 11:16:40.193959    5405 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:40.194001    5405 start.go:729] Will try again in 5 seconds ...
	I0802 11:16:45.196121    5405 start.go:360] acquireMachinesLock for false-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:45.196515    5405 start.go:364] duration metric: took 306.833µs to acquireMachinesLock for "false-800000"
	I0802 11:16:45.196599    5405 start.go:93] Provisioning new machine with config: &{Name:false-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:45.196949    5405 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:45.206408    5405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:45.240718    5405 start.go:159] libmachine.API.Create for "false-800000" (driver="qemu2")
	I0802 11:16:45.240755    5405 client.go:168] LocalClient.Create starting
	I0802 11:16:45.240850    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:45.240905    5405 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:45.240917    5405 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:45.240977    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:45.241017    5405 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:45.241030    5405 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:45.241651    5405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:45.396483    5405 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:45.457465    5405 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:45.457473    5405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:45.457672    5405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:45.467167    5405 main.go:141] libmachine: STDOUT: 
	I0802 11:16:45.467188    5405 main.go:141] libmachine: STDERR: 
	I0802 11:16:45.467245    5405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2 +20000M
	I0802 11:16:45.475261    5405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:45.475276    5405 main.go:141] libmachine: STDERR: 
	I0802 11:16:45.475298    5405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:45.475302    5405 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:45.475309    5405 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:45.475353    5405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:2f:2f:25:2a:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/false-800000/disk.qcow2
	I0802 11:16:45.477044    5405 main.go:141] libmachine: STDOUT: 
	I0802 11:16:45.477058    5405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:45.477075    5405 client.go:171] duration metric: took 236.4025ms to LocalClient.Create
	I0802 11:16:47.478467    5405 start.go:128] duration metric: took 2.282310792s to createHost
	I0802 11:16:47.478495    5405 start.go:83] releasing machines lock for "false-800000", held for 2.282787125s
	W0802 11:16:47.478730    5405 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:47.494101    5405 out.go:177] 
	W0802 11:16:47.497181    5405 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:16:47.497197    5405 out.go:239] * 
	* 
	W0802 11:16:47.498445    5405 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:16:47.510150    5405 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
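The same connection failure can be reproduced outside the test harness by invoking socket_vmnet_client directly: it takes the socket path followed by a command to exec, the same calling convention as the qemu-system-aarch64 invocation logged above. A sketch, using /usr/bin/true as a stand-in wrapped command (hypothetical; any command would do):

	# Expect: Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo "exit status: $?"    # non-zero while the daemon is down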

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.774727s)

-- stdout --
	* [enable-default-cni-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-800000" primary control-plane node in "enable-default-cni-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:16:49.721584    5517 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:16:49.721706    5517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:49.721709    5517 out.go:304] Setting ErrFile to fd 2...
	I0802 11:16:49.721711    5517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:16:49.721838    5517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:16:49.722922    5517 out.go:298] Setting JSON to false
	I0802 11:16:49.739727    5517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4573,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:16:49.739799    5517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:16:49.746908    5517 out.go:177] * [enable-default-cni-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:16:49.754779    5517 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:16:49.754823    5517 notify.go:220] Checking for updates...
	I0802 11:16:49.762769    5517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:16:49.765795    5517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:16:49.768691    5517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:16:49.771772    5517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:16:49.774804    5517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:16:49.776604    5517 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:16:49.776674    5517 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:16:49.776723    5517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:16:49.780770    5517 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:16:49.787656    5517 start.go:297] selected driver: qemu2
	I0802 11:16:49.787662    5517 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:16:49.787668    5517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:16:49.789884    5517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:16:49.793785    5517 out.go:177] * Automatically selected the socket_vmnet network
	E0802 11:16:49.796834    5517 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0802 11:16:49.796846    5517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:16:49.796860    5517 cni.go:84] Creating CNI manager for "bridge"
	I0802 11:16:49.796877    5517 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:16:49.796902    5517 start.go:340] cluster config:
	{Name:enable-default-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:16:49.800420    5517 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:16:49.808772    5517 out.go:177] * Starting "enable-default-cni-800000" primary control-plane node in "enable-default-cni-800000" cluster
	I0802 11:16:49.812830    5517 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:16:49.812846    5517 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:16:49.812855    5517 cache.go:56] Caching tarball of preloaded images
	I0802 11:16:49.812921    5517 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:16:49.812926    5517 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:16:49.812998    5517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/enable-default-cni-800000/config.json ...
	I0802 11:16:49.813009    5517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/enable-default-cni-800000/config.json: {Name:mk67280d332eb616303e4355e585caa2f2a21974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:16:49.813320    5517 start.go:360] acquireMachinesLock for enable-default-cni-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:49.813357    5517 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "enable-default-cni-800000"
	I0802 11:16:49.813368    5517 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:49.813403    5517 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:49.817824    5517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:49.834167    5517 start.go:159] libmachine.API.Create for "enable-default-cni-800000" (driver="qemu2")
	I0802 11:16:49.834195    5517 client.go:168] LocalClient.Create starting
	I0802 11:16:49.834255    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:49.834287    5517 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:49.834295    5517 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:49.834338    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:49.834360    5517 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:49.834367    5517 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:49.834791    5517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:49.988355    5517 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:50.042389    5517 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:50.042400    5517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:50.042605    5517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
	I0802 11:16:50.051854    5517 main.go:141] libmachine: STDOUT: 
	I0802 11:16:50.051877    5517 main.go:141] libmachine: STDERR: 
	I0802 11:16:50.051937    5517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2 +20000M
	I0802 11:16:50.059863    5517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:50.059877    5517 main.go:141] libmachine: STDERR: 
	I0802 11:16:50.059892    5517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
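The disk-image step above is just two qemu-img invocations: a raw-to-qcow2 convert followed by a +20000M grow. A minimal Go sketch of the same two steps via os/exec (a sketch only, not minikube's code; the file names are shortened placeholders for the full machine-directory paths in the log):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same two steps as the log: convert the raw image to qcow2, then grow it by 20000M.
		raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholders for the full paths
		for _, args := range [][]string{
			{"convert", "-f", "raw", "-O", "qcow2", raw, img},
			{"resize", img, "+20000M"},
		} {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v\n%s(err=%v)\n", args, out, err)
		}
	}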
	I0802 11:16:50.059899    5517 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:50.059911    5517 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:50.059953    5517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c8:5a:b1:9e:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
	I0802 11:16:50.061618    5517 main.go:141] libmachine: STDOUT: 
	I0802 11:16:50.061632    5517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:50.061653    5517 client.go:171] duration metric: took 227.522875ms to LocalClient.Create
	I0802 11:16:52.063319    5517 start.go:128] duration metric: took 2.250518708s to createHost
	I0802 11:16:52.063453    5517 start.go:83] releasing machines lock for "enable-default-cni-800000", held for 2.250706375s
	W0802 11:16:52.063538    5517 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
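Every attempt in this report dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not running (or not listening) on the CI host. A minimal Go probe for that socket (diagnostic sketch only, not part of minikube; the path is copied from the log):

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Dial the unix socket that socket_vmnet_client needs.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			// On this host the dial fails, matching the "Connection refused" above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}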
	I0802 11:16:52.076002    5517 out.go:177] * Deleting "enable-default-cni-800000" in qemu2 ...
	W0802 11:16:52.105699    5517 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:52.105731    5517 start.go:729] Will try again in 5 seconds ...
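The create path is attempted exactly twice: StartHost fails, the half-created profile is deleted, and after a fixed 5-second pause the same createHost sequence runs again. A compressed Go sketch of that control flow (createHost here is a hypothetical stand-in for minikube's function; on this host it always fails the same way, so the retry cannot succeed):

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// Hypothetical stand-in for minikube's host creation.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // second failure is fatal
			}
		}
	}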
	I0802 11:16:57.106704    5517 start.go:360] acquireMachinesLock for enable-default-cni-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:16:57.107006    5517 start.go:364] duration metric: took 253.917µs to acquireMachinesLock for "enable-default-cni-800000"
	I0802 11:16:57.107048    5517 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:16:57.107174    5517 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:16:57.114510    5517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:16:57.146275    5517 start.go:159] libmachine.API.Create for "enable-default-cni-800000" (driver="qemu2")
	I0802 11:16:57.146309    5517 client.go:168] LocalClient.Create starting
	I0802 11:16:57.146419    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:16:57.146486    5517 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:57.146501    5517 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:57.146553    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:16:57.146588    5517 main.go:141] libmachine: Decoding PEM data...
	I0802 11:16:57.146598    5517 main.go:141] libmachine: Parsing certificate...
	I0802 11:16:57.147001    5517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:16:57.302323    5517 main.go:141] libmachine: Creating SSH key...
	I0802 11:16:57.402951    5517 main.go:141] libmachine: Creating Disk image...
	I0802 11:16:57.402960    5517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:16:57.403185    5517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
	I0802 11:16:57.413205    5517 main.go:141] libmachine: STDOUT: 
	I0802 11:16:57.413227    5517 main.go:141] libmachine: STDERR: 
	I0802 11:16:57.413284    5517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2 +20000M
	I0802 11:16:57.422271    5517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:16:57.422286    5517 main.go:141] libmachine: STDERR: 
	I0802 11:16:57.422295    5517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
	I0802 11:16:57.422299    5517 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:16:57.422316    5517 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:16:57.422341    5517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ec:4c:53:51:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/enable-default-cni-800000/disk.qcow2
	I0802 11:16:57.424070    5517 main.go:141] libmachine: STDOUT: 
	I0802 11:16:57.424092    5517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:16:57.424105    5517 client.go:171] duration metric: took 277.843167ms to LocalClient.Create
	I0802 11:16:59.425942    5517 start.go:128] duration metric: took 2.319131292s to createHost
	I0802 11:16:59.425989    5517 start.go:83] releasing machines lock for "enable-default-cni-800000", held for 2.319405625s
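The "duration metric" lines are plain wall-clock timings taken around each phase; nothing deeper is measured. A sketch of the pattern (the Sleep stands in for the roughly 2.3s createHost phase timed above):

	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		start := time.Now()
		time.Sleep(2300 * time.Millisecond) // stand-in for the createHost work
		fmt.Printf("duration metric: took %s to createHost\n", time.Since(start))
	}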
	W0802 11:16:59.426143    5517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:16:59.435536    5517 out.go:177] 
	W0802 11:16:59.445775    5517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:16:59.445791    5517 out.go:239] * 
	* 
	W0802 11:16:59.446682    5517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:16:59.456584    5517 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
TestNetworkPlugins/group/flannel/Start (9.91s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.907175709s)
-- stdout --
	* [flannel-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-800000" primary control-plane node in "flannel-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0802 11:17:01.622776    5626 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:01.622918    5626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:01.622922    5626 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:01.622924    5626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:01.623062    5626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:01.624129    5626 out.go:298] Setting JSON to false
	I0802 11:17:01.640293    5626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4585,"bootTime":1722618036,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:01.640397    5626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:01.646171    5626 out.go:177] * [flannel-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:01.652216    5626 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:01.652284    5626 notify.go:220] Checking for updates...
	I0802 11:17:01.660116    5626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:01.664170    5626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:01.668028    5626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:01.671129    5626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:01.674158    5626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:01.677519    5626 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:01.677604    5626 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:17:01.677663    5626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:01.682123    5626 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:01.689177    5626 start.go:297] selected driver: qemu2
	I0802 11:17:01.689185    5626 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:01.689192    5626 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:01.691633    5626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:01.695089    5626 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:01.698199    5626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:01.698229    5626 cni.go:84] Creating CNI manager for "flannel"
	I0802 11:17:01.698233    5626 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
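start_flags.go reacts to a named --cni by forcing NetworkPlugin=cni in the generated cluster config; the same "Found ... CNI - setting NetworkPlugin=cni" line appears for every plugin in this group (flannel here, bridge below). A hypothetical reduction of that decision, not minikube's actual code:

	package main
	
	import "fmt"
	
	// networkPluginFor condenses the behavior visible in the log: any concrete
	// CNI choice implies NetworkPlugin=cni in the cluster config.
	func networkPluginFor(cni string) string {
		if cni != "" && cni != "false" {
			return "cni"
		}
		return "" // no explicit CNI requested
	}
	
	func main() {
		fmt.Println(networkPluginFor("flannel")) // "cni", as in "setting NetworkPlugin=cni"
	}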
	I0802 11:17:01.698256    5626 start.go:340] cluster config:
	{Name:flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:01.701864    5626 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:01.710124    5626 out.go:177] * Starting "flannel-800000" primary control-plane node in "flannel-800000" cluster
	I0802 11:17:01.713118    5626 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:17:01.713132    5626 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:17:01.713138    5626 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:01.713198    5626 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:01.713204    5626 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
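The preload phase in this run is only an existence check: the cached tarball is already on disk, so the download is skipped. A minimal sketch of that check (the path is copied from the log):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		p := "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4"
		if _, err := os.Stat(p); err == nil {
			fmt.Println("Found local preload, skipping download")
		} else {
			fmt.Println("preload not cached:", err) // minikube would download it instead
		}
	}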
	I0802 11:17:01.713258    5626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/flannel-800000/config.json ...
	I0802 11:17:01.713280    5626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/flannel-800000/config.json: {Name:mk7b5d3d6b274b5d8123507d6b6eb049af859de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:01.713465    5626 start.go:360] acquireMachinesLock for flannel-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:01.713495    5626 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "flannel-800000"
	I0802 11:17:01.713504    5626 start.go:93] Provisioning new machine with config: &{Name:flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:01.713533    5626 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:01.721104    5626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:01.736073    5626 start.go:159] libmachine.API.Create for "flannel-800000" (driver="qemu2")
	I0802 11:17:01.736102    5626 client.go:168] LocalClient.Create starting
	I0802 11:17:01.736167    5626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:01.736199    5626 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:01.736210    5626 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:01.736250    5626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:01.736273    5626 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:01.736282    5626 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:01.736625    5626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:01.889005    5626 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:02.082867    5626 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:02.082878    5626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:02.083091    5626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:02.092743    5626 main.go:141] libmachine: STDOUT: 
	I0802 11:17:02.092764    5626 main.go:141] libmachine: STDERR: 
	I0802 11:17:02.092821    5626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2 +20000M
	I0802 11:17:02.100842    5626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:02.100865    5626 main.go:141] libmachine: STDERR: 
	I0802 11:17:02.100898    5626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:02.100903    5626 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:02.100912    5626 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:02.100935    5626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d0:92:e0:09:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:02.102642    5626 main.go:141] libmachine: STDOUT: 
	I0802 11:17:02.102658    5626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:02.102681    5626 client.go:171] duration metric: took 366.631708ms to LocalClient.Create
	I0802 11:17:04.104592    5626 start.go:128] duration metric: took 2.391383375s to createHost
	I0802 11:17:04.104685    5626 start.go:83] releasing machines lock for "flannel-800000", held for 2.391541375s
	W0802 11:17:04.104756    5626 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:04.111583    5626 out.go:177] * Deleting "flannel-800000" in qemu2 ...
	W0802 11:17:04.141729    5626 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:04.141764    5626 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:09.143450    5626 start.go:360] acquireMachinesLock for flannel-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:09.144150    5626 start.go:364] duration metric: took 586.25µs to acquireMachinesLock for "flannel-800000"
	I0802 11:17:09.144235    5626 start.go:93] Provisioning new machine with config: &{Name:flannel-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:09.144462    5626 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:09.152012    5626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:09.199609    5626 start.go:159] libmachine.API.Create for "flannel-800000" (driver="qemu2")
	I0802 11:17:09.199664    5626 client.go:168] LocalClient.Create starting
	I0802 11:17:09.199802    5626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:09.199874    5626 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:09.199897    5626 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:09.199966    5626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:09.200011    5626 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:09.200027    5626 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:09.200518    5626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:09.359155    5626 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:09.442931    5626 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:09.442938    5626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:09.443141    5626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:09.452788    5626 main.go:141] libmachine: STDOUT: 
	I0802 11:17:09.452814    5626 main.go:141] libmachine: STDERR: 
	I0802 11:17:09.452874    5626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2 +20000M
	I0802 11:17:09.461054    5626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:09.461078    5626 main.go:141] libmachine: STDERR: 
	I0802 11:17:09.461100    5626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:09.461104    5626 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:09.461121    5626 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:09.461146    5626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e3:5c:12:c6:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/flannel-800000/disk.qcow2
	I0802 11:17:09.463293    5626 main.go:141] libmachine: STDOUT: 
	I0802 11:17:09.463325    5626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:09.463339    5626 client.go:171] duration metric: took 263.695084ms to LocalClient.Create
	I0802 11:17:11.465217    5626 start.go:128] duration metric: took 2.320978042s to createHost
	I0802 11:17:11.465267    5626 start.go:83] releasing machines lock for "flannel-800000", held for 2.321344625s
	W0802 11:17:11.465392    5626 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:11.477109    5626 out.go:177] 
	W0802 11:17:11.482203    5626 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:11.482209    5626 out.go:239] * 
	* 
	W0802 11:17:11.482749    5626 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:11.490172    5626 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
TestNetworkPlugins/group/bridge/Start (9.96s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E0802 11:17:14.993548    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.962324417s)
-- stdout --
	* [bridge-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-800000" primary control-plane node in "bridge-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0802 11:17:13.874699    5744 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:13.874801    5744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:13.874806    5744 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:13.874809    5744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:13.874939    5744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:13.876072    5744 out.go:298] Setting JSON to false
	I0802 11:17:13.892750    5744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4597,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:13.892858    5744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:13.900407    5744 out.go:177] * [bridge-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:13.905401    5744 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:13.905499    5744 notify.go:220] Checking for updates...
	I0802 11:17:13.912258    5744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:13.915307    5744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:13.918378    5744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:13.919800    5744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:13.923372    5744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:13.926693    5744 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:13.926761    5744 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:17:13.926814    5744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:13.931215    5744 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:13.938371    5744 start.go:297] selected driver: qemu2
	I0802 11:17:13.938379    5744 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:13.938386    5744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:13.940534    5744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:13.944195    5744 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:13.947485    5744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:13.947502    5744 cni.go:84] Creating CNI manager for "bridge"
	I0802 11:17:13.947506    5744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:17:13.947548    5744 start.go:340] cluster config:
	{Name:bridge-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:13.950915    5744 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:13.958268    5744 out.go:177] * Starting "bridge-800000" primary control-plane node in "bridge-800000" cluster
	I0802 11:17:13.962380    5744 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:17:13.962395    5744 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:17:13.962414    5744 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:13.962470    5744 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:13.962484    5744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:17:13.962535    5744 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/bridge-800000/config.json ...
	I0802 11:17:13.962551    5744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/bridge-800000/config.json: {Name:mkca991f75b24b8c6cf81303b586affaa0cf93d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:13.962881    5744 start.go:360] acquireMachinesLock for bridge-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:13.962916    5744 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "bridge-800000"
	I0802 11:17:13.962925    5744 start.go:93] Provisioning new machine with config: &{Name:bridge-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:13.962955    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:13.967333    5744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:13.982718    5744 start.go:159] libmachine.API.Create for "bridge-800000" (driver="qemu2")
	I0802 11:17:13.982742    5744 client.go:168] LocalClient.Create starting
	I0802 11:17:13.982798    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:13.982826    5744 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:13.982834    5744 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:13.982876    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:13.982899    5744 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:13.982908    5744 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:13.983289    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:14.135594    5744 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:14.260421    5744 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:14.260427    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:14.260601    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:14.269684    5744 main.go:141] libmachine: STDOUT: 
	I0802 11:17:14.269704    5744 main.go:141] libmachine: STDERR: 
	I0802 11:17:14.269768    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2 +20000M
	I0802 11:17:14.277591    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:14.277610    5744 main.go:141] libmachine: STDERR: 
	I0802 11:17:14.277625    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:14.277631    5744 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:14.277643    5744 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:14.277670    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f6:7f:60:3f:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:14.279259    5744 main.go:141] libmachine: STDOUT: 
	I0802 11:17:14.279273    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:14.279291    5744 client.go:171] duration metric: took 296.572167ms to LocalClient.Create
	I0802 11:17:16.281312    5744 start.go:128] duration metric: took 2.318537375s to createHost
	I0802 11:17:16.281408    5744 start.go:83] releasing machines lock for "bridge-800000", held for 2.318687458s
	W0802 11:17:16.281480    5744 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:16.292053    5744 out.go:177] * Deleting "bridge-800000" in qemu2 ...
	W0802 11:17:16.322236    5744 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:16.322260    5744 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:21.322140    5744 start.go:360] acquireMachinesLock for bridge-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:21.322300    5744 start.go:364] duration metric: took 124.5µs to acquireMachinesLock for "bridge-800000"
	I0802 11:17:21.322330    5744 start.go:93] Provisioning new machine with config: &{Name:bridge-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:21.322379    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:21.326888    5744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:21.343314    5744 start.go:159] libmachine.API.Create for "bridge-800000" (driver="qemu2")
	I0802 11:17:21.343340    5744 client.go:168] LocalClient.Create starting
	I0802 11:17:21.343428    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:21.343469    5744 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:21.343482    5744 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:21.343520    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:21.343545    5744 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:21.343551    5744 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:21.343859    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:21.659725    5744 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:21.747622    5744 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:21.747629    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:21.747854    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:21.757429    5744 main.go:141] libmachine: STDOUT: 
	I0802 11:17:21.757446    5744 main.go:141] libmachine: STDERR: 
	I0802 11:17:21.757502    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2 +20000M
	I0802 11:17:21.765479    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:21.765497    5744 main.go:141] libmachine: STDERR: 
	I0802 11:17:21.765509    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:21.765514    5744 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:21.765521    5744 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:21.765554    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:57:55:98:85:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/bridge-800000/disk.qcow2
	I0802 11:17:21.767189    5744 main.go:141] libmachine: STDOUT: 
	I0802 11:17:21.767212    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:21.767225    5744 client.go:171] duration metric: took 423.912167ms to LocalClient.Create
	I0802 11:17:23.769228    5744 start.go:128] duration metric: took 2.446999625s to createHost
	I0802 11:17:23.769278    5744 start.go:83] releasing machines lock for "bridge-800000", held for 2.447139542s
	W0802 11:17:23.769422    5744 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:23.781756    5744 out.go:177] 
	W0802 11:17:23.785650    5744 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:23.785658    5744 out.go:239] * 
	* 
	W0802 11:17:23.786314    5744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:23.797728    5744 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
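All of the network-plugin start failures in this stretch reduce to one root cause: every socket_vmnet_client invocation exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on the socket_vmnet unix socket on the CI host. A minimal Go sketch of a diagnostic probe (hypothetical, not part of the test suite; only the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs; a
		// "connection refused" here matches the failure in this log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports connection refused, restarting the socket_vmnet daemon on the host is the first thing to try; the remaining exit status 80 failures below stem from the same condition.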

TestNetworkPlugins/group/kubenet/Start (10.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-800000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.066109958s)

-- stdout --
	* [kubenet-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-800000" primary control-plane node in "kubenet-800000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-800000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:17:25.926607    5861 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:25.926738    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:25.926742    5861 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:25.926744    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:25.926868    5861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:25.928202    5861 out.go:298] Setting JSON to false
	I0802 11:17:25.944809    5861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4609,"bootTime":1722618036,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:25.944871    5861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:25.950694    5861 out.go:177] * [kubenet-800000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:25.958498    5861 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:25.958547    5861 notify.go:220] Checking for updates...
	I0802 11:17:25.966450    5861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:25.970499    5861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:25.974454    5861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:25.978448    5861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:25.981470    5861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:25.984826    5861 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:25.984896    5861 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:17:25.984942    5861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:25.989405    5861 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:25.996453    5861 start.go:297] selected driver: qemu2
	I0802 11:17:25.996459    5861 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:25.996464    5861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:25.998655    5861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:26.002385    5861 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:26.005625    5861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:26.005671    5861 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0802 11:17:26.005707    5861 start.go:340] cluster config:
	{Name:kubenet-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:26.009607    5861 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:26.017488    5861 out.go:177] * Starting "kubenet-800000" primary control-plane node in "kubenet-800000" cluster
	I0802 11:17:26.021458    5861 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:17:26.021481    5861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:17:26.021493    5861 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:26.021569    5861 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:26.021574    5861 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:17:26.021638    5861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kubenet-800000/config.json ...
	I0802 11:17:26.021649    5861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/kubenet-800000/config.json: {Name:mkf2131bfab458303d7ab4aaa351f442f815a1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:26.021982    5861 start.go:360] acquireMachinesLock for kubenet-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:26.022014    5861 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "kubenet-800000"
	I0802 11:17:26.022024    5861 start.go:93] Provisioning new machine with config: &{Name:kubenet-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:26.022062    5861 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:26.026481    5861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:26.043455    5861 start.go:159] libmachine.API.Create for "kubenet-800000" (driver="qemu2")
	I0802 11:17:26.043476    5861 client.go:168] LocalClient.Create starting
	I0802 11:17:26.043531    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:26.043560    5861 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:26.043567    5861 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:26.043605    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:26.043631    5861 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:26.043637    5861 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:26.044083    5861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:26.197240    5861 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:26.394281    5861 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:26.394287    5861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:26.394523    5861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:26.404411    5861 main.go:141] libmachine: STDOUT: 
	I0802 11:17:26.404431    5861 main.go:141] libmachine: STDERR: 
	I0802 11:17:26.404478    5861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2 +20000M
	I0802 11:17:26.412720    5861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:26.412737    5861 main.go:141] libmachine: STDERR: 
	I0802 11:17:26.412754    5861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:26.412759    5861 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:26.412769    5861 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:26.412799    5861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:5e:9b:a2:b5:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:26.414444    5861 main.go:141] libmachine: STDOUT: 
	I0802 11:17:26.414463    5861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:26.414482    5861 client.go:171] duration metric: took 371.024083ms to LocalClient.Create
	I0802 11:17:28.416560    5861 start.go:128] duration metric: took 2.394610375s to createHost
	I0802 11:17:28.416644    5861 start.go:83] releasing machines lock for "kubenet-800000", held for 2.394766416s
	W0802 11:17:28.416694    5861 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:28.427635    5861 out.go:177] * Deleting "kubenet-800000" in qemu2 ...
	W0802 11:17:28.460224    5861 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:28.460255    5861 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:33.461193    5861 start.go:360] acquireMachinesLock for kubenet-800000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:33.461702    5861 start.go:364] duration metric: took 398.084µs to acquireMachinesLock for "kubenet-800000"
	I0802 11:17:33.461845    5861 start.go:93] Provisioning new machine with config: &{Name:kubenet-800000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:33.462044    5861 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:33.470907    5861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 11:17:33.512898    5861 start.go:159] libmachine.API.Create for "kubenet-800000" (driver="qemu2")
	I0802 11:17:33.512943    5861 client.go:168] LocalClient.Create starting
	I0802 11:17:33.513071    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:33.513165    5861 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:33.513180    5861 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:33.513238    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:33.513285    5861 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:33.513294    5861 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:33.513803    5861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:33.674278    5861 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:33.896154    5861 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:33.896165    5861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:33.896385    5861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:33.907104    5861 main.go:141] libmachine: STDOUT: 
	I0802 11:17:33.907135    5861 main.go:141] libmachine: STDERR: 
	I0802 11:17:33.907200    5861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2 +20000M
	I0802 11:17:33.916173    5861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:33.916191    5861 main.go:141] libmachine: STDERR: 
	I0802 11:17:33.916207    5861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:33.916216    5861 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:33.916224    5861 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:33.916260    5861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:43:bf:7f:74:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/kubenet-800000/disk.qcow2
	I0802 11:17:33.918457    5861 main.go:141] libmachine: STDOUT: 
	I0802 11:17:33.918475    5861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:33.918500    5861 client.go:171] duration metric: took 405.57375ms to LocalClient.Create
	I0802 11:17:35.919964    5861 start.go:128] duration metric: took 2.458028167s to createHost
	I0802 11:17:35.919997    5861 start.go:83] releasing machines lock for "kubenet-800000", held for 2.458393917s
	W0802 11:17:35.920154    5861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-800000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:35.930672    5861 out.go:177] 
	W0802 11:17:35.941659    5861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:35.941670    5861 out.go:239] * 
	* 
	W0802 11:17:35.942701    5861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:35.953652    5861 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.07s)
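The log also shows minikube's recovery path: after the first createHost failure it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries exactly once, and then exits with GUEST_PROVISION (exit status 80). A simplified Go sketch of that observed single-retry shape (illustrative only, not minikube's actual implementation; startHost is a hypothetical stand-in):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost is a hypothetical stand-in for minikube's createHost step;
	// here it always fails the way every attempt in this log does.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "kubenet-800000"
		err := startHost(profile)
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
			err = startHost(profile)
		}
		if err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches the exit status 80 the tests observe
		}
		fmt.Println("host started")
	}

Because the daemon never comes back between attempts, the single retry cannot help here; every test in this group fails on the same ten-second cadence.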

TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.007958625s)

-- stdout --
	* [old-k8s-version-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-752000" primary control-plane node in "old-k8s-version-752000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-752000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:17:38.250499    5974 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:38.250623    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:38.250627    5974 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:38.250629    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:38.250763    5974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:38.251857    5974 out.go:298] Setting JSON to false
	I0802 11:17:38.268293    5974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4622,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:38.268355    5974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:38.273724    5974 out.go:177] * [old-k8s-version-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:38.281740    5974 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:38.281785    5974 notify.go:220] Checking for updates...
	I0802 11:17:38.288683    5974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:38.290211    5974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:38.294610    5974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:38.297648    5974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:38.298930    5974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:38.302072    5974 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:38.302140    5974 config.go:182] Loaded profile config "stopped-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0802 11:17:38.302189    5974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:38.306651    5974 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:38.311683    5974 start.go:297] selected driver: qemu2
	I0802 11:17:38.311690    5974 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:38.311698    5974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:38.314093    5974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:38.317647    5974 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:38.319120    5974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:38.319146    5974 cni.go:84] Creating CNI manager for ""
	I0802 11:17:38.319153    5974 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 11:17:38.319181    5974 start.go:340] cluster config:
	{Name:old-k8s-version-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:38.322810    5974 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:38.332700    5974 out.go:177] * Starting "old-k8s-version-752000" primary control-plane node in "old-k8s-version-752000" cluster
	I0802 11:17:38.335632    5974 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 11:17:38.335653    5974 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 11:17:38.335661    5974 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:38.335724    5974 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:38.335729    5974 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0802 11:17:38.335782    5974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/old-k8s-version-752000/config.json ...
	I0802 11:17:38.335793    5974 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/old-k8s-version-752000/config.json: {Name:mk2df4d28bfa960fad632d785bc4dc20cac5e9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:38.335998    5974 start.go:360] acquireMachinesLock for old-k8s-version-752000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:38.336030    5974 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "old-k8s-version-752000"
	I0802 11:17:38.336040    5974 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:38.336066    5974 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:38.343584    5974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:17:38.359317    5974 start.go:159] libmachine.API.Create for "old-k8s-version-752000" (driver="qemu2")
	I0802 11:17:38.359351    5974 client.go:168] LocalClient.Create starting
	I0802 11:17:38.359419    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:38.359449    5974 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:38.359458    5974 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:38.359497    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:38.359519    5974 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:38.359525    5974 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:38.359871    5974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:38.511319    5974 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:38.777363    5974 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:38.777375    5974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:38.777613    5974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:38.787585    5974 main.go:141] libmachine: STDOUT: 
	I0802 11:17:38.787606    5974 main.go:141] libmachine: STDERR: 
	I0802 11:17:38.787653    5974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2 +20000M
	I0802 11:17:38.795643    5974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:38.795658    5974 main.go:141] libmachine: STDERR: 
	I0802 11:17:38.795675    5974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:38.795680    5974 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:38.795693    5974 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:38.795723    5974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ec:e8:f6:97:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:38.797348    5974 main.go:141] libmachine: STDOUT: 
	I0802 11:17:38.797364    5974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:38.797381    5974 client.go:171] duration metric: took 438.045542ms to LocalClient.Create
	I0802 11:17:40.799489    5974 start.go:128] duration metric: took 2.463510666s to createHost
	I0802 11:17:40.799558    5974 start.go:83] releasing machines lock for "old-k8s-version-752000", held for 2.463636959s
	W0802 11:17:40.799722    5974 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:40.805467    5974 out.go:177] * Deleting "old-k8s-version-752000" in qemu2 ...
	W0802 11:17:40.836712    5974 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:40.836738    5974 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:45.838613    5974 start.go:360] acquireMachinesLock for old-k8s-version-752000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:45.838761    5974 start.go:364] duration metric: took 110.666µs to acquireMachinesLock for "old-k8s-version-752000"
	I0802 11:17:45.838780    5974 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:45.838850    5974 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:45.847008    5974 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:17:45.866326    5974 start.go:159] libmachine.API.Create for "old-k8s-version-752000" (driver="qemu2")
	I0802 11:17:45.866353    5974 client.go:168] LocalClient.Create starting
	I0802 11:17:45.866409    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:45.866448    5974 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:45.866457    5974 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:45.866489    5974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:45.866516    5974 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:45.866522    5974 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:45.866794    5974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:46.017979    5974 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:46.165830    5974 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:46.165844    5974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:46.166051    5974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:46.175528    5974 main.go:141] libmachine: STDOUT: 
	I0802 11:17:46.175545    5974 main.go:141] libmachine: STDERR: 
	I0802 11:17:46.175590    5974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2 +20000M
	I0802 11:17:46.183841    5974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:46.183867    5974 main.go:141] libmachine: STDERR: 
	I0802 11:17:46.183882    5974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:46.183888    5974 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:46.183897    5974 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:46.183932    5974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:52:2e:b4:94:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:46.185710    5974 main.go:141] libmachine: STDOUT: 
	I0802 11:17:46.185727    5974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:46.185744    5974 client.go:171] duration metric: took 319.401208ms to LocalClient.Create
	I0802 11:17:48.187837    5974 start.go:128] duration metric: took 2.34906075s to createHost
	I0802 11:17:48.187891    5974 start.go:83] releasing machines lock for "old-k8s-version-752000", held for 2.349220125s
	W0802 11:17:48.188412    5974 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:48.197987    5974 out.go:177] 
	W0802 11:17:48.204036    5974 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:48.204054    5974 out.go:239] * 
	* 
	W0802 11:17:48.205513    5974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:48.215992    5974 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (61.54775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)
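
Note: every qemu2 start in this run dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the profile is left in state "Stopped". A minimal triage sketch for the CI host follows; the service-management command is an assumption about a Homebrew-managed socket_vmnet install, not something recorded in this report:

	# Is the unix socket present on the host?
	ls -l /var/run/socket_vmnet
	# Is the socket_vmnet daemon actually running?
	pgrep -fl socket_vmnet
	# If not, restart it (Homebrew-managed example; adjust to the local install)
	sudo brew services restart socket_vmnet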

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-752000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-752000 create -f testdata/busybox.yaml: exit status 1 (29.274459ms)

** stderr ** 
	error: context "old-k8s-version-752000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-752000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (29.017583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (29.315375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
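
Note: this failure is pure fallout from FirstStart above: the cluster was never provisioned, so no "old-k8s-version-752000" context was written to the kubeconfig and kubectl has nothing to create the busybox pod against. A quick check that confirms the missing context (hypothetical, not run by the harness):

	kubectl config get-contexts old-k8s-version-752000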

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-752000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-752000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-752000 describe deploy/metrics-server -n kube-system: exit status 1 (27.22075ms)

** stderr ** 
	error: context "old-k8s-version-752000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-752000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (29.582417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
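
Note: "addons enable" exits zero here because it only rewrites the profile config on disk; it is the follow-up "kubectl describe" that needs a live cluster, fails on the missing context, and leaves the deployment info empty for the image assertion. On a healthy cluster the same check could read the image directly, e.g. (a hypothetical one-liner, not part of the test):

	kubectl --context old-k8s-version-752000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print: fake.domain/registry.k8s.io/echoserver:1.4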

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.206608667s)

-- stdout --
	* [old-k8s-version-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-752000" primary control-plane node in "old-k8s-version-752000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:17:51.992510    6025 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:51.992636    6025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:51.992639    6025 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:51.992642    6025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:51.992769    6025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:51.993824    6025 out.go:298] Setting JSON to false
	I0802 11:17:52.009908    6025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4635,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:52.010109    6025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:52.014958    6025 out.go:177] * [old-k8s-version-752000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:52.021883    6025 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:52.021913    6025 notify.go:220] Checking for updates...
	I0802 11:17:52.028874    6025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:52.032916    6025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:52.041899    6025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:52.049808    6025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:52.052858    6025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:52.056278    6025 config.go:182] Loaded profile config "old-k8s-version-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0802 11:17:52.059833    6025 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 11:17:52.062935    6025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:52.066886    6025 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:17:52.073866    6025 start.go:297] selected driver: qemu2
	I0802 11:17:52.073874    6025 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:52.073930    6025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:52.076247    6025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:52.076270    6025 cni.go:84] Creating CNI manager for ""
	I0802 11:17:52.076278    6025 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 11:17:52.076317    6025 start.go:340] cluster config:
	{Name:old-k8s-version-752000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:52.079813    6025 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:52.087851    6025 out.go:177] * Starting "old-k8s-version-752000" primary control-plane node in "old-k8s-version-752000" cluster
	I0802 11:17:52.091906    6025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 11:17:52.091919    6025 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 11:17:52.091926    6025 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:52.091979    6025 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:52.091984    6025 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0802 11:17:52.092034    6025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/old-k8s-version-752000/config.json ...
	I0802 11:17:52.092369    6025 start.go:360] acquireMachinesLock for old-k8s-version-752000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:52.092396    6025 start.go:364] duration metric: took 20.959µs to acquireMachinesLock for "old-k8s-version-752000"
	I0802 11:17:52.092404    6025 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:17:52.092410    6025 fix.go:54] fixHost starting: 
	I0802 11:17:52.092519    6025 fix.go:112] recreateIfNeeded on old-k8s-version-752000: state=Stopped err=<nil>
	W0802 11:17:52.092527    6025 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:17:52.096892    6025 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-752000" ...
	I0802 11:17:52.104714    6025 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:52.104746    6025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:52:2e:b4:94:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:52.106546    6025 main.go:141] libmachine: STDOUT: 
	I0802 11:17:52.106563    6025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:52.106593    6025 fix.go:56] duration metric: took 14.183708ms for fixHost
	I0802 11:17:52.106597    6025 start.go:83] releasing machines lock for "old-k8s-version-752000", held for 14.19825ms
	W0802 11:17:52.106603    6025 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:52.106630    6025 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:52.106634    6025 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:57.107051    6025 start.go:360] acquireMachinesLock for old-k8s-version-752000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:57.107514    6025 start.go:364] duration metric: took 355.833µs to acquireMachinesLock for "old-k8s-version-752000"
	I0802 11:17:57.107635    6025 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:17:57.107654    6025 fix.go:54] fixHost starting: 
	I0802 11:17:57.108406    6025 fix.go:112] recreateIfNeeded on old-k8s-version-752000: state=Stopped err=<nil>
	W0802 11:17:57.108434    6025 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:17:57.113163    6025 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-752000" ...
	I0802 11:17:57.124091    6025 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:57.124331    6025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:52:2e:b4:94:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/old-k8s-version-752000/disk.qcow2
	I0802 11:17:57.134484    6025 main.go:141] libmachine: STDOUT: 
	I0802 11:17:57.134547    6025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:57.134657    6025 fix.go:56] duration metric: took 27.004709ms for fixHost
	I0802 11:17:57.134680    6025 start.go:83] releasing machines lock for "old-k8s-version-752000", held for 27.141208ms
	W0802 11:17:57.134856    6025 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:57.141845    6025 out.go:177] 
	W0802 11:17:57.146767    6025 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:17:57.146818    6025 out.go:239] * 
	* 
	W0802 11:17:57.149333    6025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:17:57.158856    6025 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-752000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (65.372292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
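
Note: unlike FirstStart, SecondStart goes down the existing-profile path (fixHost, "Restarting existing qemu2 VM") and skips disk creation, yet both restart attempts hit the same socket_vmnet refusal. The log's own advice is to recreate the profile, e.g.:

	out/minikube-darwin-arm64 delete -p old-k8s-version-752000
	out/minikube-darwin-arm64 start -p old-k8s-version-752000 --driver=qemu2 --kubernetes-version=v1.20.0

though deleting the profile cannot help while the socket_vmnet daemon itself is unreachable.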

TestStartStop/group/no-preload/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.964757417s)

-- stdout --
	* [no-preload-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-501000" primary control-plane node in "no-preload-501000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-501000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:17:53.919099    6035 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:53.919237    6035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:53.919240    6035 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:53.919243    6035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:53.919367    6035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:53.920399    6035 out.go:298] Setting JSON to false
	I0802 11:17:53.936444    6035 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4637,"bootTime":1722618036,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:53.936526    6035 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:53.940469    6035 out.go:177] * [no-preload-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:53.944518    6035 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:53.944593    6035 notify.go:220] Checking for updates...
	I0802 11:17:53.949414    6035 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:53.952470    6035 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:53.953619    6035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:53.956443    6035 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:53.959474    6035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:53.962887    6035 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:53.962961    6035 config.go:182] Loaded profile config "old-k8s-version-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0802 11:17:53.963003    6035 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:53.967388    6035 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:53.974485    6035 start.go:297] selected driver: qemu2
	I0802 11:17:53.974493    6035 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:53.974500    6035 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:53.976798    6035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:53.980455    6035 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:53.983554    6035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:53.983587    6035 cni.go:84] Creating CNI manager for ""
	I0802 11:17:53.983596    6035 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:17:53.983600    6035 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:17:53.983634    6035 start.go:340] cluster config:
	{Name:no-preload-501000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:53.987215    6035 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.994445    6035 out.go:177] * Starting "no-preload-501000" primary control-plane node in "no-preload-501000" cluster
	I0802 11:17:53.998471    6035 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 11:17:53.998546    6035 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/no-preload-501000/config.json ...
	I0802 11:17:53.998565    6035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/no-preload-501000/config.json: {Name:mk383665d964dff4ad729729fdcb2ee848a55e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:53.998590    6035 cache.go:107] acquiring lock: {Name:mk3115db8876b96740ef61c362e182fe6c315e12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998597    6035 cache.go:107] acquiring lock: {Name:mkd24349b9efbfb6b274584840d4e80d923c1e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998606    6035 cache.go:107] acquiring lock: {Name:mk33b3c9740cd09600a19ba2afe63b9b97f0eb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998649    6035 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0802 11:17:53.998656    6035 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.75µs
	I0802 11:17:53.998666    6035 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0802 11:17:53.998677    6035 cache.go:107] acquiring lock: {Name:mk11013d08f2e9e452bdb57fd21f573cf8e14e7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998760    6035 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0802 11:17:53.998767    6035 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0802 11:17:53.998781    6035 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0802 11:17:53.998590    6035 cache.go:107] acquiring lock: {Name:mk34561774b0989fa544216a95a5decb104d7537 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998827    6035 cache.go:107] acquiring lock: {Name:mk0c9d7af24b10d9f8ea5ceb6d4bcf6cd6fb8c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998908    6035 start.go:360] acquireMachinesLock for no-preload-501000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:53.998904    6035 cache.go:107] acquiring lock: {Name:mkeaf65f34c288dccd02546bb6ff755bca87710a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998946    6035 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "no-preload-501000"
	I0802 11:17:53.998954    6035 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0802 11:17:53.998982    6035 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0802 11:17:53.998910    6035 cache.go:107] acquiring lock: {Name:mkb1f5789d5d9b46987a3b099b80ac69a9b138f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:53.998956    6035 start.go:93] Provisioning new machine with config: &{Name:no-preload-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:53.998994    6035 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0802 11:17:53.998998    6035 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:53.999067    6035 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0802 11:17:54.002496    6035 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:17:54.011137    6035 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0802 11:17:54.011617    6035 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0802 11:17:54.011615    6035 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0802 11:17:54.013539    6035 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0802 11:17:54.013682    6035 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0802 11:17:54.013736    6035 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0802 11:17:54.013786    6035 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0802 11:17:54.019631    6035 start.go:159] libmachine.API.Create for "no-preload-501000" (driver="qemu2")
	I0802 11:17:54.019651    6035 client.go:168] LocalClient.Create starting
	I0802 11:17:54.019726    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:54.019756    6035 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:54.019765    6035 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:54.019810    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:54.019835    6035 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:54.019844    6035 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:54.020216    6035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:54.174547    6035 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:54.314146    6035 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:54.314160    6035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:54.314345    6035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:17:54.323443    6035 main.go:141] libmachine: STDOUT: 
	I0802 11:17:54.323458    6035 main.go:141] libmachine: STDERR: 
	I0802 11:17:54.323498    6035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2 +20000M
	I0802 11:17:54.331381    6035 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:54.331392    6035 main.go:141] libmachine: STDERR: 
	I0802 11:17:54.331403    6035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:17:54.331409    6035 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:54.331420    6035 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:54.331444    6035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5e:37:8b:3c:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:17:54.333075    6035 main.go:141] libmachine: STDOUT: 
	I0802 11:17:54.333090    6035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:54.333109    6035 client.go:171] duration metric: took 313.466834ms to LocalClient.Create
	I0802 11:17:54.407112    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0802 11:17:54.407126    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0802 11:17:54.462923    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0802 11:17:54.472277    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0802 11:17:54.472640    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0802 11:17:54.520253    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0802 11:17:54.544887    6035 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0802 11:17:54.585661    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0802 11:17:54.585706    6035 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 587.118375ms
	I0802 11:17:54.585738    6035 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0802 11:17:56.082880    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0802 11:17:56.082932    6035 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.084251958s
	I0802 11:17:56.082955    6035 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0802 11:17:56.284293    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0802 11:17:56.284346    6035 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 2.285610667s
	I0802 11:17:56.284376    6035 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0802 11:17:56.333247    6035 start.go:128] duration metric: took 2.334330167s to createHost
	I0802 11:17:56.333303    6035 start.go:83] releasing machines lock for "no-preload-501000", held for 2.334442958s
	W0802 11:17:56.333362    6035 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:56.349908    6035 out.go:177] * Deleting "no-preload-501000" in qemu2 ...
	W0802 11:17:56.382985    6035 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:17:56.383012    6035 start.go:729] Will try again in 5 seconds ...
	I0802 11:17:58.506831    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0802 11:17:58.506890    6035 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 4.508484292s
	I0802 11:17:58.506911    6035 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0802 11:17:58.755934    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0802 11:17:58.756001    6035 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 4.757370209s
	I0802 11:17:58.756043    6035 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0802 11:17:58.936083    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0802 11:17:58.936151    6035 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.93775925s
	I0802 11:17:58.936184    6035 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0802 11:18:01.382996    6035 start.go:360] acquireMachinesLock for no-preload-501000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:01.383502    6035 start.go:364] duration metric: took 416.917µs to acquireMachinesLock for "no-preload-501000"
	I0802 11:18:01.383671    6035 start.go:93] Provisioning new machine with config: &{Name:no-preload-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:01.383920    6035 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:01.389576    6035 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:01.442086    6035 start.go:159] libmachine.API.Create for "no-preload-501000" (driver="qemu2")
	I0802 11:18:01.442129    6035 client.go:168] LocalClient.Create starting
	I0802 11:18:01.442239    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:01.442305    6035 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:01.442322    6035 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:01.442393    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:01.442438    6035 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:01.442462    6035 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:01.442966    6035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:01.633364    6035 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:01.784621    6035 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:01.784630    6035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:01.784840    6035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:18:01.795045    6035 main.go:141] libmachine: STDOUT: 
	I0802 11:18:01.795115    6035 main.go:141] libmachine: STDERR: 
	I0802 11:18:01.795168    6035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2 +20000M
	I0802 11:18:01.803050    6035 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:01.803067    6035 main.go:141] libmachine: STDERR: 
	I0802 11:18:01.803085    6035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:18:01.803089    6035 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:01.803100    6035 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:01.803138    6035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:86:cf:02:60:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:18:01.804837    6035 main.go:141] libmachine: STDOUT: 
	I0802 11:18:01.804852    6035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:01.804864    6035 client.go:171] duration metric: took 362.743667ms to LocalClient.Create
	I0802 11:18:03.313359    6035 cache.go:157] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0802 11:18:03.313436    6035 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 9.315123s
	I0802 11:18:03.313472    6035 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0802 11:18:03.313516    6035 cache.go:87] Successfully saved all images to host disk.
	I0802 11:18:03.805228    6035 start.go:128] duration metric: took 2.421329s to createHost
	I0802 11:18:03.805285    6035 start.go:83] releasing machines lock for "no-preload-501000", held for 2.421844292s
	W0802 11:18:03.805540    6035 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:03.823970    6035 out.go:177] 
	W0802 11:18:03.828074    6035 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:03.828102    6035 out.go:239] * 
	* 
	W0802 11:18:03.830565    6035 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:03.843909    6035 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (64.040875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.03s)
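Triage note: every qemu2 start in this run dies at the same point — socket_vmnet_client cannot reach the daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM is ever created and the later subtests in each group cascade. A minimal check sketch for the CI host, assuming the stock /opt/socket_vmnet layout the log paths point at (the launchd lookup and the gateway address below are illustrative, not taken from this log):

	# is anything serving the unix socket minikube tries to open?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# run the daemon in the foreground to watch connections succeed or fail
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet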

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-752000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (31.787125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
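Triage note: the context "old-k8s-version-752000" does not exist errors in this and the following subtests are a cascade from the failed FirstStart — minikube only writes a kubeconfig context once a cluster comes up. One way to confirm the context really is absent (plain kubectl; only the KUBECONFIG path is taken from the log):

	KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig kubectl config get-contexts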

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-752000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.577583ms)

** stderr ** 
	error: context "old-k8s-version-752000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-752000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (28.5525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-752000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (29.41975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
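Triage note: the []string block above is a want/got diff; the "-" lines are the expected v1.20.0 images that "image list" failed to report. Because the host never started, the list comes back empty and the whole image set is flagged as missing. The probe the test runs is reproducible by hand (command verbatim from the log):

	out/minikube-darwin-arm64 -p old-k8s-version-752000 image list --format=json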

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-752000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-752000 --alsologtostderr -v=1: exit status 83 (42.9795ms)

-- stdout --
	* The control-plane node old-k8s-version-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-752000"

-- /stdout --
** stderr ** 
	I0802 11:17:57.422401    6085 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:57.422773    6085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:57.422778    6085 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:57.422781    6085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:57.422999    6085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:57.423208    6085 out.go:298] Setting JSON to false
	I0802 11:17:57.423213    6085 mustload.go:65] Loading cluster: old-k8s-version-752000
	I0802 11:17:57.423392    6085 config.go:182] Loaded profile config "old-k8s-version-752000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0802 11:17:57.426818    6085 out.go:177] * The control-plane node old-k8s-version-752000 host is not running: state=Stopped
	I0802 11:17:57.434958    6085 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-752000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-752000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (27.61125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (27.869708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
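Triage note: the exit codes in this block are minikube's own, not generic shell failures — in this harness "status" exits 7 when the host is stopped or missing (tolerated as "may be ok"), while "pause" exits 83 after printing that the control-plane host is not running. A quick way to observe the code directly (binary and profile name as in the log):

	out/minikube-darwin-arm64 status -p old-k8s-version-752000 --format={{.Host}}; echo "exit=$?"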

TestStartStop/group/embed-certs/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.0133875s)

-- stdout --
	* [embed-certs-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-797000" primary control-plane node in "embed-certs-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:17:57.734334    6102 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:17:57.734451    6102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:57.734454    6102 out.go:304] Setting ErrFile to fd 2...
	I0802 11:17:57.734457    6102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:17:57.734588    6102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:17:57.735649    6102 out.go:298] Setting JSON to false
	I0802 11:17:57.752026    6102 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4641,"bootTime":1722618036,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:17:57.752098    6102 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:17:57.756817    6102 out.go:177] * [embed-certs-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:17:57.763938    6102 notify.go:220] Checking for updates...
	I0802 11:17:57.767824    6102 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:17:57.773271    6102 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:17:57.780861    6102 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:17:57.786789    6102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:17:57.793812    6102 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:17:57.801870    6102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:17:57.806125    6102 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:17:57.806192    6102 config.go:182] Loaded profile config "no-preload-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0802 11:17:57.806238    6102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:17:57.809830    6102 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:17:57.816822    6102 start.go:297] selected driver: qemu2
	I0802 11:17:57.816834    6102 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:17:57.816840    6102 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:17:57.819344    6102 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:17:57.823801    6102 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:17:57.827949    6102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:17:57.827970    6102 cni.go:84] Creating CNI manager for ""
	I0802 11:17:57.827979    6102 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:17:57.827988    6102 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:17:57.828034    6102 start.go:340] cluster config:
	{Name:embed-certs-797000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:17:57.831893    6102 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:17:57.835793    6102 out.go:177] * Starting "embed-certs-797000" primary control-plane node in "embed-certs-797000" cluster
	I0802 11:17:57.843825    6102 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:17:57.843843    6102 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:17:57.843857    6102 cache.go:56] Caching tarball of preloaded images
	I0802 11:17:57.843932    6102 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:17:57.843941    6102 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:17:57.844054    6102 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/embed-certs-797000/config.json ...
	I0802 11:17:57.844067    6102 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/embed-certs-797000/config.json: {Name:mk23165fa7f5a60690de230ea0c33de4d8de0bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:17:57.844278    6102 start.go:360] acquireMachinesLock for embed-certs-797000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:17:57.844314    6102 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "embed-certs-797000"
	I0802 11:17:57.844329    6102 start.go:93] Provisioning new machine with config: &{Name:embed-certs-797000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:17:57.844357    6102 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:17:57.852835    6102 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:17:57.870785    6102 start.go:159] libmachine.API.Create for "embed-certs-797000" (driver="qemu2")
	I0802 11:17:57.870809    6102 client.go:168] LocalClient.Create starting
	I0802 11:17:57.870883    6102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:17:57.870920    6102 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:57.870933    6102 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:57.870971    6102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:17:57.871001    6102 main.go:141] libmachine: Decoding PEM data...
	I0802 11:17:57.871010    6102 main.go:141] libmachine: Parsing certificate...
	I0802 11:17:57.871365    6102 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:17:58.023658    6102 main.go:141] libmachine: Creating SSH key...
	I0802 11:17:58.216456    6102 main.go:141] libmachine: Creating Disk image...
	I0802 11:17:58.216463    6102 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:17:58.216674    6102 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:17:58.226061    6102 main.go:141] libmachine: STDOUT: 
	I0802 11:17:58.226084    6102 main.go:141] libmachine: STDERR: 
	I0802 11:17:58.226136    6102 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2 +20000M
	I0802 11:17:58.234346    6102 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:17:58.234362    6102 main.go:141] libmachine: STDERR: 
	I0802 11:17:58.234379    6102 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:17:58.234383    6102 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:17:58.234396    6102 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:17:58.234425    6102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:cb:e7:9a:23:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:17:58.236143    6102 main.go:141] libmachine: STDOUT: 
	I0802 11:17:58.236158    6102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:17:58.236173    6102 client.go:171] duration metric: took 365.373583ms to LocalClient.Create
	I0802 11:18:00.238277    6102 start.go:128] duration metric: took 2.393987583s to createHost
	I0802 11:18:00.238351    6102 start.go:83] releasing machines lock for "embed-certs-797000", held for 2.394122042s
	W0802 11:18:00.238444    6102 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:00.245634    6102 out.go:177] * Deleting "embed-certs-797000" in qemu2 ...
	W0802 11:18:00.274058    6102 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:00.274093    6102 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:05.276058    6102 start.go:360] acquireMachinesLock for embed-certs-797000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:05.276512    6102 start.go:364] duration metric: took 319.375µs to acquireMachinesLock for "embed-certs-797000"
	I0802 11:18:05.276691    6102 start.go:93] Provisioning new machine with config: &{Name:embed-certs-797000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:05.277012    6102 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:05.285723    6102 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:05.336012    6102 start.go:159] libmachine.API.Create for "embed-certs-797000" (driver="qemu2")
	I0802 11:18:05.336060    6102 client.go:168] LocalClient.Create starting
	I0802 11:18:05.336164    6102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:05.336215    6102 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:05.336236    6102 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:05.336308    6102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:05.336339    6102 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:05.336354    6102 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:05.336929    6102 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:05.523475    6102 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:05.646127    6102 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:05.646141    6102 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:05.646335    6102 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:18:05.655518    6102 main.go:141] libmachine: STDOUT: 
	I0802 11:18:05.655542    6102 main.go:141] libmachine: STDERR: 
	I0802 11:18:05.655591    6102 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2 +20000M
	I0802 11:18:05.663311    6102 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:05.663325    6102 main.go:141] libmachine: STDERR: 
	I0802 11:18:05.663338    6102 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:18:05.663348    6102 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:05.663358    6102 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:05.663394    6102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:06:ac:95:b8:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:18:05.665123    6102 main.go:141] libmachine: STDOUT: 
	I0802 11:18:05.665136    6102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:05.665157    6102 client.go:171] duration metric: took 329.103292ms to LocalClient.Create
	I0802 11:18:07.667248    6102 start.go:128] duration metric: took 2.390269542s to createHost
	I0802 11:18:07.667294    6102 start.go:83] releasing machines lock for "embed-certs-797000", held for 2.390848541s
	W0802 11:18:07.667591    6102 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:07.684308    6102 out.go:177] 
	W0802 11:18:07.687412    6102 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:07.687451    6102 out.go:239] * 
	* 
	W0802 11:18:07.690222    6102 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:07.699219    6102 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (61.129625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.08s)
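Triage note: this block shows the full retry path — create, connect failure, "Deleting ... in qemu2", a 5-second backoff, one recreate, then exit 80 (GUEST_PROVISION). The network half can be isolated from minikube by invoking the client wrapper the way libmachine does (socket and client paths verbatim from the log; "true" is a stand-in for the qemu command line):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true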

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-501000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-501000 create -f testdata/busybox.yaml: exit status 1 (29.514875ms)

** stderr ** 
	error: context "no-preload-501000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-501000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (29.084958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (28.224333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-501000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-501000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-501000 describe deploy/metrics-server -n kube-system: exit status 1 (26.568416ms)

** stderr ** 
	error: context "no-preload-501000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-501000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (28.53925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
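Triage note: unlike the kubectl steps, the "addons enable metrics-server" run above reports no error even though no cluster is running — the enable is recorded in the profile, which is why the later SecondStart config dump carries Addons:map[dashboard:true metrics-server:true] and the fake.domain registry override. A hedged spot-check against the on-disk profile (the path follows the profile directory seen in the log; the exact JSON key layout is an assumption):

	grep -o '"metrics-server": *[a-z]*' /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/no-preload-501000/config.json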

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-797000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-797000 create -f testdata/busybox.yaml: exit status 1 (33.482459ms)

** stderr ** 
	error: context "embed-certs-797000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-797000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (32.514209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (29.5185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.213301917s)

-- stdout --
	* [no-preload-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-501000" primary control-plane node in "no-preload-501000" cluster
	* Restarting existing qemu2 VM for "no-preload-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:07.895054    6160 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:07.895194    6160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:07.895198    6160 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:07.895201    6160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:07.895327    6160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:07.896568    6160 out.go:298] Setting JSON to false
	I0802 11:18:07.915366    6160 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4651,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:07.915428    6160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:07.924397    6160 out.go:177] * [no-preload-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:07.933390    6160 notify.go:220] Checking for updates...
	I0802 11:18:07.938283    6160 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:07.946252    6160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:07.954202    6160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:07.962203    6160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:07.970287    6160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:07.978262    6160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:07.982593    6160 config.go:182] Loaded profile config "no-preload-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0802 11:18:07.982890    6160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:07.987228    6160 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:18:07.994277    6160 start.go:297] selected driver: qemu2
	I0802 11:18:07.994285    6160 start.go:901] validating driver "qemu2" against &{Name:no-preload-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:07.994363    6160 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:07.996950    6160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:18:07.996971    6160 cni.go:84] Creating CNI manager for ""
	I0802 11:18:07.996979    6160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:07.997016    6160 start.go:340] cluster config:
	{Name:no-preload-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:08.000971    6160 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.007268    6160 out.go:177] * Starting "no-preload-501000" primary control-plane node in "no-preload-501000" cluster
	I0802 11:18:08.011236    6160 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 11:18:08.011326    6160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/no-preload-501000/config.json ...
	I0802 11:18:08.011354    6160 cache.go:107] acquiring lock: {Name:mkd24349b9efbfb6b274584840d4e80d923c1e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011350    6160 cache.go:107] acquiring lock: {Name:mk3115db8876b96740ef61c362e182fe6c315e12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011434    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0802 11:18:08.011441    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0802 11:18:08.011449    6160 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 108.833µs
	I0802 11:18:08.011453    6160 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.125µs
	I0802 11:18:08.011458    6160 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0802 11:18:08.011458    6160 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0802 11:18:08.011465    6160 cache.go:107] acquiring lock: {Name:mk11013d08f2e9e452bdb57fd21f573cf8e14e7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011468    6160 cache.go:107] acquiring lock: {Name:mk34561774b0989fa544216a95a5decb104d7537 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011473    6160 cache.go:107] acquiring lock: {Name:mk33b3c9740cd09600a19ba2afe63b9b97f0eb3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011511    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0802 11:18:08.011514    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0802 11:18:08.011516    6160 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 51.625µs
	I0802 11:18:08.011518    6160 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 51.083µs
	I0802 11:18:08.011523    6160 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0802 11:18:08.011520    6160 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0802 11:18:08.011598    6160 cache.go:107] acquiring lock: {Name:mkb1f5789d5d9b46987a3b099b80ac69a9b138f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011604    6160 cache.go:107] acquiring lock: {Name:mk0c9d7af24b10d9f8ea5ceb6d4bcf6cd6fb8c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011625    6160 cache.go:107] acquiring lock: {Name:mkeaf65f34c288dccd02546bb6ff755bca87710a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:08.011667    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0802 11:18:08.011675    6160 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 101.625µs
	I0802 11:18:08.011679    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0802 11:18:08.011683    6160 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0802 11:18:08.011686    6160 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 84.333µs
	I0802 11:18:08.011692    6160 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0802 11:18:08.011701    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0802 11:18:08.011705    6160 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 122.334µs
	I0802 11:18:08.011712    6160 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0802 11:18:08.011712    6160 cache.go:115] /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0802 11:18:08.011716    6160 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 283.292µs
	I0802 11:18:08.011720    6160 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0802 11:18:08.011726    6160 cache.go:87] Successfully saved all images to host disk.
	I0802 11:18:08.011809    6160 start.go:360] acquireMachinesLock for no-preload-501000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:08.011839    6160 start.go:364] duration metric: took 24.209µs to acquireMachinesLock for "no-preload-501000"
	I0802 11:18:08.011847    6160 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:08.011854    6160 fix.go:54] fixHost starting: 
	I0802 11:18:08.011985    6160 fix.go:112] recreateIfNeeded on no-preload-501000: state=Stopped err=<nil>
	W0802 11:18:08.011993    6160 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:08.020239    6160 out.go:177] * Restarting existing qemu2 VM for "no-preload-501000" ...
	I0802 11:18:08.024211    6160 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:08.024254    6160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:86:cf:02:60:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:18:08.026415    6160 main.go:141] libmachine: STDOUT: 
	I0802 11:18:08.026439    6160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:08.026469    6160 fix.go:56] duration metric: took 14.616375ms for fixHost
	I0802 11:18:08.026474    6160 start.go:83] releasing machines lock for "no-preload-501000", held for 14.631541ms
	W0802 11:18:08.026481    6160 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:08.026517    6160 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:08.026521    6160 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:13.028567    6160 start.go:360] acquireMachinesLock for no-preload-501000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:13.028942    6160 start.go:364] duration metric: took 297.167µs to acquireMachinesLock for "no-preload-501000"
	I0802 11:18:13.029065    6160 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:13.029085    6160 fix.go:54] fixHost starting: 
	I0802 11:18:13.029836    6160 fix.go:112] recreateIfNeeded on no-preload-501000: state=Stopped err=<nil>
	W0802 11:18:13.029861    6160 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:13.034415    6160 out.go:177] * Restarting existing qemu2 VM for "no-preload-501000" ...
	I0802 11:18:13.038055    6160 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:13.038273    6160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:86:cf:02:60:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/no-preload-501000/disk.qcow2
	I0802 11:18:13.046595    6160 main.go:141] libmachine: STDOUT: 
	I0802 11:18:13.046709    6160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:13.046790    6160 fix.go:56] duration metric: took 17.705583ms for fixHost
	I0802 11:18:13.046818    6160 start.go:83] releasing machines lock for "no-preload-501000", held for 17.855208ms
	W0802 11:18:13.047027    6160 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:13.054301    6160 out.go:177] 
	W0802 11:18:13.058318    6160 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:13.058350    6160 out.go:239] * 
	* 
	W0802 11:18:13.060447    6160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:13.067216    6160 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-501000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (65.331ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.28s)
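
Every failed start in this run dies at the same point: QEMU's network helper cannot reach the socket_vmnet daemon, so libmachine gets `Failed to connect to "/var/run/socket_vmnet": Connection refused` before the VM ever boots. A minimal manual check on the CI host might look like the sketch below; the paths are taken from the log above, while the daemon invocation and its gateway flag are assumptions based on socket_vmnet's documented usage, not commands this run executed.

	# Is anything serving the socket that socket_vmnet_client connects to?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical restart of the daemon (flags per the socket_vmnet README):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet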

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-797000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-797000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-797000 describe deploy/metrics-server -n kube-system: exit status 1 (29.394208ms)

** stderr ** 
	error: context "embed-certs-797000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-797000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (28.614625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.18s)
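
The `context "embed-certs-797000" does not exist` failure above is a downstream symptom rather than an addon bug: FirstStart never brought the VM up, so minikube never wrote a kubeconfig context for the profile, and every kubectl call fails before the metrics-server assertion is even evaluated. One way to confirm this by hand (standard kubectl, not part of the test run):

	kubectl config get-contexts    # a profile's context appears here only after a successful start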

TestStartStop/group/embed-certs/serial/SecondStart (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.054440917s)

-- stdout --
	* [embed-certs-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-797000" primary control-plane node in "embed-certs-797000" cluster
	* Restarting existing qemu2 VM for "embed-certs-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:10.184353    6189 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:10.184501    6189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:10.184504    6189 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:10.184507    6189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:10.184614    6189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:10.185631    6189 out.go:298] Setting JSON to false
	I0802 11:18:10.201582    6189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4654,"bootTime":1722618036,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:10.201646    6189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:10.206456    6189 out.go:177] * [embed-certs-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:10.213367    6189 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:10.213408    6189 notify.go:220] Checking for updates...
	I0802 11:18:10.220316    6189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:10.223374    6189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:10.226408    6189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:10.229407    6189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:10.232374    6189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:10.235695    6189 config.go:182] Loaded profile config "embed-certs-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:10.235956    6189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:10.240273    6189 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:18:10.247348    6189 start.go:297] selected driver: qemu2
	I0802 11:18:10.247354    6189 start.go:901] validating driver "qemu2" against &{Name:embed-certs-797000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:10.247404    6189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:10.249600    6189 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:18:10.249639    6189 cni.go:84] Creating CNI manager for ""
	I0802 11:18:10.249648    6189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:10.249670    6189 start.go:340] cluster config:
	{Name:embed-certs-797000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:10.253212    6189 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:10.261449    6189 out.go:177] * Starting "embed-certs-797000" primary control-plane node in "embed-certs-797000" cluster
	I0802 11:18:10.264374    6189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:18:10.264391    6189 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:18:10.264403    6189 cache.go:56] Caching tarball of preloaded images
	I0802 11:18:10.264478    6189 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:18:10.264484    6189 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:18:10.264540    6189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/embed-certs-797000/config.json ...
	I0802 11:18:10.265012    6189 start.go:360] acquireMachinesLock for embed-certs-797000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:10.265039    6189 start.go:364] duration metric: took 21.959µs to acquireMachinesLock for "embed-certs-797000"
	I0802 11:18:10.265047    6189 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:10.265053    6189 fix.go:54] fixHost starting: 
	I0802 11:18:10.265164    6189 fix.go:112] recreateIfNeeded on embed-certs-797000: state=Stopped err=<nil>
	W0802 11:18:10.265172    6189 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:10.268383    6189 out.go:177] * Restarting existing qemu2 VM for "embed-certs-797000" ...
	I0802 11:18:10.276399    6189 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:10.276443    6189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:06:ac:95:b8:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:18:10.278335    6189 main.go:141] libmachine: STDOUT: 
	I0802 11:18:10.278354    6189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:10.278381    6189 fix.go:56] duration metric: took 13.328167ms for fixHost
	I0802 11:18:10.278386    6189 start.go:83] releasing machines lock for "embed-certs-797000", held for 13.343292ms
	W0802 11:18:10.278393    6189 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:10.278434    6189 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:10.278440    6189 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:15.280478    6189 start.go:360] acquireMachinesLock for embed-certs-797000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:16.127530    6189 start.go:364] duration metric: took 846.958375ms to acquireMachinesLock for "embed-certs-797000"
	I0802 11:18:16.127702    6189 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:16.127723    6189 fix.go:54] fixHost starting: 
	I0802 11:18:16.128466    6189 fix.go:112] recreateIfNeeded on embed-certs-797000: state=Stopped err=<nil>
	W0802 11:18:16.128496    6189 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:16.136879    6189 out.go:177] * Restarting existing qemu2 VM for "embed-certs-797000" ...
	I0802 11:18:16.151930    6189 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:16.152168    6189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:06:ac:95:b8:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/embed-certs-797000/disk.qcow2
	I0802 11:18:16.162425    6189 main.go:141] libmachine: STDOUT: 
	I0802 11:18:16.162487    6189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:16.162557    6189 fix.go:56] duration metric: took 34.837041ms for fixHost
	I0802 11:18:16.162577    6189 start.go:83] releasing machines lock for "embed-certs-797000", held for 35.003917ms
	W0802 11:18:16.162765    6189 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:16.171914    6189 out.go:177] 
	W0802 11:18:16.176922    6189 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:16.176941    6189 out.go:239] * 
	* 
	W0802 11:18:16.178595    6189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:16.194828    6189 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (62.594042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.12s)
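
The recovery path minikube itself prints is to delete the profile and recreate it once /var/run/socket_vmnet is being served again; for this profile that means the same binary and flags as the failing run, e.g.:

	out/minikube-darwin-arm64 delete -p embed-certs-797000
	out/minikube-darwin-arm64 start -p embed-certs-797000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.30.3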

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-501000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (32.229541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-501000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-501000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-501000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.482166ms)

** stderr ** 
	error: context "no-preload-501000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-501000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (28.573875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-501000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
}
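The diff above uses (-want +got) notation: all eight expected image references sit on the want side and the got side is empty, because `image list` was run against a profile whose VM never started, not because individual images failed to load. On a healthy node the same query from the log, e.g.

	out/minikube-darwin-arm64 -p no-preload-501000 image list --format=json

would return a JSON array containing each of these references.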
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (28.2085ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-501000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-501000 --alsologtostderr -v=1: exit status 83 (40.314916ms)

-- stdout --
	* The control-plane node no-preload-501000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-501000"

-- /stdout --
** stderr ** 
	I0802 11:18:13.330249    6209 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:13.330397    6209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:13.330400    6209 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:13.330403    6209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:13.330527    6209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:13.330725    6209 out.go:298] Setting JSON to false
	I0802 11:18:13.330732    6209 mustload.go:65] Loading cluster: no-preload-501000
	I0802 11:18:13.330912    6209 config.go:182] Loaded profile config "no-preload-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0802 11:18:13.335196    6209 out.go:177] * The control-plane node no-preload-501000 host is not running: state=Stopped
	I0802 11:18:13.338176    6209 out.go:177]   To start a cluster, run: "minikube start -p no-preload-501000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-501000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (28.477334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (27.848ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.902114625s)

-- stdout --
	* [default-k8s-diff-port-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-171000" primary control-plane node in "default-k8s-diff-port-171000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:13.741225    6233 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:13.741585    6233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:13.741590    6233 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:13.741593    6233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:13.741779    6233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:13.743195    6233 out.go:298] Setting JSON to false
	I0802 11:18:13.759521    6233 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4657,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:13.759587    6233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:13.764083    6233 out.go:177] * [default-k8s-diff-port-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:13.771188    6233 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:13.771242    6233 notify.go:220] Checking for updates...
	I0802 11:18:13.778080    6233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:13.781142    6233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:13.784143    6233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:13.787111    6233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:13.790165    6233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:13.793477    6233 config.go:182] Loaded profile config "embed-certs-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:13.793535    6233 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:13.793600    6233 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:13.798079    6233 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:18:13.805070    6233 start.go:297] selected driver: qemu2
	I0802 11:18:13.805076    6233 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:18:13.805083    6233 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:13.807418    6233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 11:18:13.810076    6233 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:18:13.813165    6233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:18:13.813197    6233 cni.go:84] Creating CNI manager for ""
	I0802 11:18:13.813203    6233 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:13.813207    6233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:18:13.813243    6233 start.go:340] cluster config:
	{Name:default-k8s-diff-port-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:13.817053    6233 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:13.825118    6233 out.go:177] * Starting "default-k8s-diff-port-171000" primary control-plane node in "default-k8s-diff-port-171000" cluster
	I0802 11:18:13.829127    6233 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:18:13.829146    6233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:18:13.829160    6233 cache.go:56] Caching tarball of preloaded images
	I0802 11:18:13.829224    6233 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:18:13.829238    6233 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:18:13.829301    6233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/default-k8s-diff-port-171000/config.json ...
	I0802 11:18:13.829319    6233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/default-k8s-diff-port-171000/config.json: {Name:mk708b035a204f54170e59c19e144faf9d09953a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:18:13.829675    6233 start.go:360] acquireMachinesLock for default-k8s-diff-port-171000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:13.829714    6233 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "default-k8s-diff-port-171000"
	I0802 11:18:13.829725    6233 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:13.829760    6233 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:13.837935    6233 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:13.856165    6233 start.go:159] libmachine.API.Create for "default-k8s-diff-port-171000" (driver="qemu2")
	I0802 11:18:13.856188    6233 client.go:168] LocalClient.Create starting
	I0802 11:18:13.856247    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:13.856279    6233 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:13.856291    6233 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:13.856334    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:13.856358    6233 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:13.856365    6233 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:13.856793    6233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:14.009507    6233 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:14.106182    6233 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:14.106188    6233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:14.106378    6233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:14.115459    6233 main.go:141] libmachine: STDOUT: 
	I0802 11:18:14.115477    6233 main.go:141] libmachine: STDERR: 
	I0802 11:18:14.115527    6233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2 +20000M
	I0802 11:18:14.123482    6233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:14.123503    6233 main.go:141] libmachine: STDERR: 
	I0802 11:18:14.123517    6233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:14.123523    6233 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:14.123538    6233 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:14.123567    6233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:41:c7:09:47:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:14.125162    6233 main.go:141] libmachine: STDOUT: 
	I0802 11:18:14.125197    6233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:14.125217    6233 client.go:171] duration metric: took 269.034917ms to LocalClient.Create
	I0802 11:18:16.127317    6233 start.go:128] duration metric: took 2.297624334s to createHost
	I0802 11:18:16.127382    6233 start.go:83] releasing machines lock for "default-k8s-diff-port-171000", held for 2.297744042s
	W0802 11:18:16.127490    6233 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:16.147881    6233 out.go:177] * Deleting "default-k8s-diff-port-171000" in qemu2 ...
	W0802 11:18:16.207329    6233 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:16.207369    6233 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:21.209448    6233 start.go:360] acquireMachinesLock for default-k8s-diff-port-171000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:21.209880    6233 start.go:364] duration metric: took 357.25µs to acquireMachinesLock for "default-k8s-diff-port-171000"
	I0802 11:18:21.210021    6233 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:21.210303    6233 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:21.218793    6233 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:21.266428    6233 start.go:159] libmachine.API.Create for "default-k8s-diff-port-171000" (driver="qemu2")
	I0802 11:18:21.266476    6233 client.go:168] LocalClient.Create starting
	I0802 11:18:21.266597    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:21.266670    6233 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:21.266688    6233 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:21.266749    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:21.266795    6233 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:21.266807    6233 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:21.270315    6233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:21.436118    6233 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:21.563338    6233 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:21.563344    6233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:21.563537    6233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:21.572502    6233 main.go:141] libmachine: STDOUT: 
	I0802 11:18:21.572520    6233 main.go:141] libmachine: STDERR: 
	I0802 11:18:21.572572    6233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2 +20000M
	I0802 11:18:21.580414    6233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:21.580428    6233 main.go:141] libmachine: STDERR: 
	I0802 11:18:21.580439    6233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:21.580442    6233 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:21.580454    6233 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:21.580483    6233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:a5:b2:e7:9c:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:21.582061    6233 main.go:141] libmachine: STDOUT: 
	I0802 11:18:21.582078    6233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:21.582090    6233 client.go:171] duration metric: took 315.62075ms to LocalClient.Create
	I0802 11:18:23.584181    6233 start.go:128] duration metric: took 2.3738855s to createHost
	I0802 11:18:23.584204    6233 start.go:83] releasing machines lock for "default-k8s-diff-port-171000", held for 2.374394s
	W0802 11:18:23.584289    6233 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:23.592461    6233 out.go:177] 
	W0802 11:18:23.600501    6233 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:23.600507    6233 out.go:239] * 
	W0802 11:18:23.601107    6233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:23.608254    6233 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (32.65625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)
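Note on the failure mode: every start attempt in this group dies before the VM boots, because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). The probe below is a minimal Go sketch of the connection that fails; it is illustrative only, not minikube's actual code, and the socket path is the one recorded in this log.

	// probe.go: check whether the socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client must connect to this unix socket before QEMU can start.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With the daemon down this prints a "connection refused" error,
			// matching the STDERR captured above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A passing probe combined with a failing VM start would point at permissions on the socket rather than a stopped daemon.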

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-797000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (32.04125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
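This failure, and the embed-certs failures that follow, cascade from the failed first start: the cluster was never provisioned, so minikube never wrote an "embed-certs-797000" context into the kubeconfig, and every client call aborts before any network I/O. A minimal sketch of the lookup that fails, using client-go's clientcmd (the same loading rules kubectl applies; illustrative only, not the test code):

	// ctxcheck.go: confirm whether a kubeconfig context exists.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolves $KUBECONFIG / ~/.kube/config exactly like kubectl does.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["embed-certs-797000"]; !ok {
			// The state these tests hit: start failed, so the context was never written.
			fmt.Println(`context "embed-certs-797000" does not exist`)
		}
	}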

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-797000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.67275ms)

** stderr ** 
	error: context "embed-certs-797000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (28.656958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-797000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
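The `-want +got` block above reads as a go-cmp style diff: every `-` line is an image the test expected for Kubernetes v1.30.3 but did not find, and the got side is empty because `image list` ran against a host that never started. A sketch of how such a comparison produces that output, assuming github.com/google/go-cmp (the diff format matches that library; the actual test code in start_stop_delete_test.go may differ):

	package verify_test

	import (
		"testing"

		"github.com/google/go-cmp/cmp"
	)

	func TestImagesPresent(t *testing.T) {
		// want: the expected image set; got: what `image list` returned.
		want := []string{"registry.k8s.io/kube-apiserver:v1.30.3", "registry.k8s.io/pause:3.9"}
		var got []string // empty here: the VM never booted
		if diff := cmp.Diff(want, got); diff != "" {
			t.Errorf("v1.30.3 images missing (-want +got):\n%s", diff)
		}
	}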
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (28.256708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-797000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-797000 --alsologtostderr -v=1: exit status 83 (40.125791ms)

-- stdout --
	* The control-plane node embed-certs-797000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-797000"

-- /stdout --
** stderr ** 
	I0802 11:18:16.458414    6255 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:16.458577    6255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:16.458581    6255 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:16.458583    6255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:16.458709    6255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:16.458928    6255 out.go:298] Setting JSON to false
	I0802 11:18:16.458934    6255 mustload.go:65] Loading cluster: embed-certs-797000
	I0802 11:18:16.459127    6255 config.go:182] Loaded profile config "embed-certs-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:16.462789    6255 out.go:177] * The control-plane node embed-certs-797000 host is not running: state=Stopped
	I0802 11:18:16.467695    6255 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-797000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-797000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (28.499208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (28.760083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.837633875s)

-- stdout --
	* [newest-cni-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-671000" primary control-plane node in "newest-cni-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:16.762487    6272 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:16.762626    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:16.762629    6272 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:16.762631    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:16.762766    6272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:16.763837    6272 out.go:298] Setting JSON to false
	I0802 11:18:16.779927    6272 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4660,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:16.779986    6272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:16.784902    6272 out.go:177] * [newest-cni-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:16.791762    6272 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:16.791831    6272 notify.go:220] Checking for updates...
	I0802 11:18:16.797696    6272 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:16.800695    6272 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:16.803744    6272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:16.806690    6272 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:16.809769    6272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:16.813056    6272 config.go:182] Loaded profile config "default-k8s-diff-port-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:16.813122    6272 config.go:182] Loaded profile config "multinode-325000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:16.813181    6272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:16.817677    6272 out.go:177] * Using the qemu2 driver based on user configuration
	I0802 11:18:16.824787    6272 start.go:297] selected driver: qemu2
	I0802 11:18:16.824793    6272 start.go:901] validating driver "qemu2" against <nil>
	I0802 11:18:16.824802    6272 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:16.827099    6272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0802 11:18:16.827119    6272 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0802 11:18:16.834675    6272 out.go:177] * Automatically selected the socket_vmnet network
	I0802 11:18:16.837791    6272 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0802 11:18:16.837804    6272 cni.go:84] Creating CNI manager for ""
	I0802 11:18:16.837810    6272 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:16.837814    6272 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 11:18:16.837839    6272 start.go:340] cluster config:
	{Name:newest-cni-671000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:16.841655    6272 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:16.849740    6272 out.go:177] * Starting "newest-cni-671000" primary control-plane node in "newest-cni-671000" cluster
	I0802 11:18:16.853685    6272 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 11:18:16.853702    6272 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0802 11:18:16.853714    6272 cache.go:56] Caching tarball of preloaded images
	I0802 11:18:16.853772    6272 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:18:16.853778    6272 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0802 11:18:16.853847    6272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/newest-cni-671000/config.json ...
	I0802 11:18:16.853859    6272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/newest-cni-671000/config.json: {Name:mka7b276f6203607c6de42a3bbde6d8dfcf4ac87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 11:18:16.854088    6272 start.go:360] acquireMachinesLock for newest-cni-671000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:16.854122    6272 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "newest-cni-671000"
	I0802 11:18:16.854133    6272 start.go:93] Provisioning new machine with config: &{Name:newest-cni-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:16.854164    6272 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:16.861548    6272 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:16.879518    6272 start.go:159] libmachine.API.Create for "newest-cni-671000" (driver="qemu2")
	I0802 11:18:16.879549    6272 client.go:168] LocalClient.Create starting
	I0802 11:18:16.879619    6272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:16.879649    6272 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:16.879662    6272 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:16.879700    6272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:16.879724    6272 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:16.879732    6272 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:16.880092    6272 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:17.048080    6272 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:17.101072    6272 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:17.101077    6272 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:17.101274    6272 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:17.110412    6272 main.go:141] libmachine: STDOUT: 
	I0802 11:18:17.110437    6272 main.go:141] libmachine: STDERR: 
	I0802 11:18:17.110487    6272 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2 +20000M
	I0802 11:18:17.118220    6272 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:17.118234    6272 main.go:141] libmachine: STDERR: 
	I0802 11:18:17.118247    6272 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:17.118251    6272 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:17.118264    6272 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:17.118288    6272 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4c:1e:e0:fc:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:17.119866    6272 main.go:141] libmachine: STDOUT: 
	I0802 11:18:17.119882    6272 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:17.119899    6272 client.go:171] duration metric: took 240.35425ms to LocalClient.Create
	I0802 11:18:19.122066    6272 start.go:128] duration metric: took 2.267964875s to createHost
	I0802 11:18:19.122130    6272 start.go:83] releasing machines lock for "newest-cni-671000", held for 2.26808375s
	W0802 11:18:19.122189    6272 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:19.133154    6272 out.go:177] * Deleting "newest-cni-671000" in qemu2 ...
	W0802 11:18:19.172023    6272 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:19.172054    6272 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:24.174117    6272 start.go:360] acquireMachinesLock for newest-cni-671000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:24.174502    6272 start.go:364] duration metric: took 290.042µs to acquireMachinesLock for "newest-cni-671000"
	I0802 11:18:24.174683    6272 start.go:93] Provisioning new machine with config: &{Name:newest-cni-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 11:18:24.175008    6272 start.go:125] createHost starting for "" (driver="qemu2")
	I0802 11:18:24.184672    6272 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 11:18:24.235744    6272 start.go:159] libmachine.API.Create for "newest-cni-671000" (driver="qemu2")
	I0802 11:18:24.235801    6272 client.go:168] LocalClient.Create starting
	I0802 11:18:24.235900    6272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/ca.pem
	I0802 11:18:24.235945    6272 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:24.235963    6272 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:24.236040    6272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-1243/.minikube/certs/cert.pem
	I0802 11:18:24.236069    6272 main.go:141] libmachine: Decoding PEM data...
	I0802 11:18:24.236080    6272 main.go:141] libmachine: Parsing certificate...
	I0802 11:18:24.236618    6272 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0802 11:18:24.425358    6272 main.go:141] libmachine: Creating SSH key...
	I0802 11:18:24.504470    6272 main.go:141] libmachine: Creating Disk image...
	I0802 11:18:24.504476    6272 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0802 11:18:24.504672    6272 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:24.514162    6272 main.go:141] libmachine: STDOUT: 
	I0802 11:18:24.514179    6272 main.go:141] libmachine: STDERR: 
	I0802 11:18:24.514242    6272 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2 +20000M
	I0802 11:18:24.522092    6272 main.go:141] libmachine: STDOUT: Image resized.
	
	I0802 11:18:24.522107    6272 main.go:141] libmachine: STDERR: 
	I0802 11:18:24.522117    6272 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:24.522125    6272 main.go:141] libmachine: Starting QEMU VM...
	I0802 11:18:24.522136    6272 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:24.522171    6272 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:28:ae:c3:4d:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:24.523840    6272 main.go:141] libmachine: STDOUT: 
	I0802 11:18:24.523858    6272 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:24.523872    6272 client.go:171] duration metric: took 288.076959ms to LocalClient.Create
	I0802 11:18:26.525997    6272 start.go:128] duration metric: took 2.351045792s to createHost
	I0802 11:18:26.526080    6272 start.go:83] releasing machines lock for "newest-cni-671000", held for 2.351605125s
	W0802 11:18:26.526432    6272 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:26.534971    6272 out.go:177] 
	W0802 11:18:26.546134    6272 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:26.546175    6272 out.go:239] * 
	W0802 11:18:26.548668    6272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:26.560025    6272 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (63.633208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.90s)
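
Every failure in this group traces to the same line in the stderr above: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver cannot attach the VM's network backend. A minimal sketch for checking the daemon on the build host, assuming BSD netcat is available (the socket path is taken directly from the log):

	# Does the unix socket exist where minikube expects it?
	ls -l /var/run/socket_vmnet
	# Is anything accepting connections on it? -U selects a unix-domain
	# socket, -z only probes; "Connection refused" here reproduces the
	# driver error above.
	nc -z -U /var/run/socket_vmnet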

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171000 create -f testdata/busybox.yaml: exit status 1 (26.551042ms)

** stderr ** 
	error: context "default-k8s-diff-port-171000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-171000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (29.050542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (28.71025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)
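
The "context does not exist" error follows directly from the failed start: the VM never booted, so minikube never wrote a default-k8s-diff-port-171000 entry into the kubeconfig. A quick way to confirm which contexts kubectl can actually see (standard kubectl subcommands, nothing assumed beyond the binary itself):

	# List every context known to the active kubeconfig; a profile whose
	# start failed will be absent here.
	kubectl config get-contexts
	# Names only:
	kubectl config get-contexts -o name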

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-171000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-171000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171000 describe deploy/metrics-server -n kube-system: exit status 1 (26.66825ms)

** stderr ** 
	error: context "default-k8s-diff-port-171000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-171000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (29.116083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
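
The test enables the addon with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so the deployment is expected to reference the registry-prefixed image fake.domain/registry.k8s.io/echoserver:1.4. Had the cluster existed, a sketch like the following would read the image actually deployed (standard kubectl jsonpath; the container index is an assumption about the manifest):

	# Print the image reference the metrics-server deployment runs, which
	# the test expects to start with "fake.domain/".
	kubectl --context default-k8s-diff-port-171000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'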

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.671622667s)

-- stdout --
	* [default-k8s-diff-port-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-171000" primary control-plane node in "default-k8s-diff-port-171000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:25.975649    6317 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:25.975766    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:25.975769    6317 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:25.975772    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:25.975895    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:25.976882    6317 out.go:298] Setting JSON to false
	I0802 11:18:25.992848    6317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4669,"bootTime":1722618036,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:25.992915    6317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:25.998153    6317 out.go:177] * [default-k8s-diff-port-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:26.005019    6317 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:26.005045    6317 notify.go:220] Checking for updates...
	I0802 11:18:26.012021    6317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:26.015084    6317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:26.018054    6317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:26.019409    6317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:26.022069    6317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:26.025361    6317 config.go:182] Loaded profile config "default-k8s-diff-port-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:26.025654    6317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:26.029894    6317 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:18:26.037079    6317 start.go:297] selected driver: qemu2
	I0802 11:18:26.037083    6317 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:26.037128    6317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:26.039151    6317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 11:18:26.039201    6317 cni.go:84] Creating CNI manager for ""
	I0802 11:18:26.039208    6317 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:26.039227    6317 start.go:340] cluster config:
	{Name:default-k8s-diff-port-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:26.042637    6317 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:26.050044    6317 out.go:177] * Starting "default-k8s-diff-port-171000" primary control-plane node in "default-k8s-diff-port-171000" cluster
	I0802 11:18:26.054080    6317 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 11:18:26.054097    6317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 11:18:26.054108    6317 cache.go:56] Caching tarball of preloaded images
	I0802 11:18:26.054175    6317 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:18:26.054181    6317 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 11:18:26.054256    6317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/default-k8s-diff-port-171000/config.json ...
	I0802 11:18:26.054685    6317 start.go:360] acquireMachinesLock for default-k8s-diff-port-171000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:26.526186    6317 start.go:364] duration metric: took 471.4965ms to acquireMachinesLock for "default-k8s-diff-port-171000"
	I0802 11:18:26.526355    6317 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:26.526417    6317 fix.go:54] fixHost starting: 
	I0802 11:18:26.527108    6317 fix.go:112] recreateIfNeeded on default-k8s-diff-port-171000: state=Stopped err=<nil>
	W0802 11:18:26.527172    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:26.543078    6317 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-171000" ...
	I0802 11:18:26.550080    6317 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:26.550271    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:a5:b2:e7:9c:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:26.560117    6317 main.go:141] libmachine: STDOUT: 
	I0802 11:18:26.560215    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:26.560390    6317 fix.go:56] duration metric: took 33.9755ms for fixHost
	I0802 11:18:26.560421    6317 start.go:83] releasing machines lock for "default-k8s-diff-port-171000", held for 34.203042ms
	W0802 11:18:26.560460    6317 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:26.560659    6317 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:26.560688    6317 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:31.561302    6317 start.go:360] acquireMachinesLock for default-k8s-diff-port-171000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:31.561784    6317 start.go:364] duration metric: took 320.083µs to acquireMachinesLock for "default-k8s-diff-port-171000"
	I0802 11:18:31.561904    6317 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:31.561924    6317 fix.go:54] fixHost starting: 
	I0802 11:18:31.562658    6317 fix.go:112] recreateIfNeeded on default-k8s-diff-port-171000: state=Stopped err=<nil>
	W0802 11:18:31.562685    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:31.568230    6317 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-171000" ...
	I0802 11:18:31.576236    6317 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:31.576547    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:a5:b2:e7:9c:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/default-k8s-diff-port-171000/disk.qcow2
	I0802 11:18:31.585512    6317 main.go:141] libmachine: STDOUT: 
	I0802 11:18:31.585569    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:31.585642    6317 fix.go:56] duration metric: took 23.72175ms for fixHost
	I0802 11:18:31.585656    6317 start.go:83] releasing machines lock for "default-k8s-diff-port-171000", held for 23.85025ms
	W0802 11:18:31.585807    6317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:31.593190    6317 out.go:177] 
	W0802 11:18:31.596305    6317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:31.596338    6317 out.go:239] * 
	* 
	W0802 11:18:31.599010    6317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:31.606131    6317 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-171000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (63.209375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.74s)
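
The stderr above shows minikube launching qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the failure occurring before qemu ever runs. Swapping the qemu command for a no-op isolates the connection step; this sketch assumes socket_vmnet_client simply connects to the socket and then execs the trailing command, as its invocation in the log suggests:

	# Same wrapper, trivial payload: a "Connection refused" here pins the
	# failure on the socket_vmnet daemon rather than on qemu itself.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true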

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.181682084s)

-- stdout --
	* [newest-cni-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-671000" primary control-plane node in "newest-cni-671000" cluster
	* Restarting existing qemu2 VM for "newest-cni-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0802 11:18:30.162744    6352 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:30.162873    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:30.162876    6352 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:30.162878    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:30.163017    6352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:30.164060    6352 out.go:298] Setting JSON to false
	I0802 11:18:30.180024    6352 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4674,"bootTime":1722618036,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 11:18:30.180091    6352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 11:18:30.185613    6352 out.go:177] * [newest-cni-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 11:18:30.192612    6352 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 11:18:30.192654    6352 notify.go:220] Checking for updates...
	I0802 11:18:30.200527    6352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 11:18:30.204557    6352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 11:18:30.205830    6352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 11:18:30.208515    6352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 11:18:30.211574    6352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 11:18:30.214856    6352 config.go:182] Loaded profile config "newest-cni-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0802 11:18:30.215121    6352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 11:18:30.219465    6352 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 11:18:30.226553    6352 start.go:297] selected driver: qemu2
	I0802 11:18:30.226561    6352 start.go:901] validating driver "qemu2" against &{Name:newest-cni-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:30.226624    6352 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 11:18:30.228857    6352 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0802 11:18:30.228894    6352 cni.go:84] Creating CNI manager for ""
	I0802 11:18:30.228905    6352 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 11:18:30.228928    6352 start.go:340] cluster config:
	{Name:newest-cni-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 11:18:30.232461    6352 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 11:18:30.239553    6352 out.go:177] * Starting "newest-cni-671000" primary control-plane node in "newest-cni-671000" cluster
	I0802 11:18:30.243482    6352 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 11:18:30.243497    6352 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0802 11:18:30.243510    6352 cache.go:56] Caching tarball of preloaded images
	I0802 11:18:30.243577    6352 preload.go:172] Found /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0802 11:18:30.243583    6352 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0802 11:18:30.243640    6352 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/newest-cni-671000/config.json ...
	I0802 11:18:30.244043    6352 start.go:360] acquireMachinesLock for newest-cni-671000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:30.244079    6352 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "newest-cni-671000"
	I0802 11:18:30.244088    6352 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:30.244094    6352 fix.go:54] fixHost starting: 
	I0802 11:18:30.244200    6352 fix.go:112] recreateIfNeeded on newest-cni-671000: state=Stopped err=<nil>
	W0802 11:18:30.244209    6352 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:30.252470    6352 out.go:177] * Restarting existing qemu2 VM for "newest-cni-671000" ...
	I0802 11:18:30.256536    6352 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:30.256581    6352 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:28:ae:c3:4d:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:30.258509    6352 main.go:141] libmachine: STDOUT: 
	I0802 11:18:30.258528    6352 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:30.258557    6352 fix.go:56] duration metric: took 14.463208ms for fixHost
	I0802 11:18:30.258562    6352 start.go:83] releasing machines lock for "newest-cni-671000", held for 14.478792ms
	W0802 11:18:30.258569    6352 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:30.258606    6352 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:30.258611    6352 start.go:729] Will try again in 5 seconds ...
	I0802 11:18:35.260678    6352 start.go:360] acquireMachinesLock for newest-cni-671000: {Name:mk39bb02d7c82a0545bf44da0ba943d3e2e804e5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 11:18:35.261117    6352 start.go:364] duration metric: took 354.208µs to acquireMachinesLock for "newest-cni-671000"
	I0802 11:18:35.261255    6352 start.go:96] Skipping create...Using existing machine configuration
	I0802 11:18:35.261276    6352 fix.go:54] fixHost starting: 
	I0802 11:18:35.262056    6352 fix.go:112] recreateIfNeeded on newest-cni-671000: state=Stopped err=<nil>
	W0802 11:18:35.262085    6352 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 11:18:35.270506    6352 out.go:177] * Restarting existing qemu2 VM for "newest-cni-671000" ...
	I0802 11:18:35.273545    6352 qemu.go:418] Using hvf for hardware acceleration
	I0802 11:18:35.273772    6352 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:28:ae:c3:4d:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-1243/.minikube/machines/newest-cni-671000/disk.qcow2
	I0802 11:18:35.283707    6352 main.go:141] libmachine: STDOUT: 
	I0802 11:18:35.283769    6352 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0802 11:18:35.283841    6352 fix.go:56] duration metric: took 22.569292ms for fixHost
	I0802 11:18:35.283901    6352 start.go:83] releasing machines lock for "newest-cni-671000", held for 22.719542ms
	W0802 11:18:35.284083    6352 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0802 11:18:35.291486    6352 out.go:177] 
	W0802 11:18:35.294618    6352 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0802 11:18:35.294643    6352 out.go:239] * 
	* 
	W0802 11:18:35.297348    6352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 11:18:35.308537    6352 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-671000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (68.799ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
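
Both SecondStart failures show the driver's built-in retry: StartHost fails, minikube waits five seconds, tries once more, then exits with GUEST_PROVISION. Reduced to a shell sketch, the control flow looks roughly like this (illustrative only; the real loop lives in minikube's start.go, and the no-op payload stands in for the qemu launch):

	# Two attempts, 5 s apart, mirroring "Will try again in 5 seconds ..."
	for attempt in 1 2; do
	  /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true && break
	  echo "attempt $attempt failed; will try again in 5 seconds ..."
	  sleep 5
	done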

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-171000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (31.757958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-171000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-171000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-171000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.674084ms)

** stderr ** 
	error: context "default-k8s-diff-port-171000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-171000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (28.500583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-171000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (28.811584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
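
The "-want +got" diff above is one-sided because the profile is stopped: image list has no VM to query, so the harness compares the eight wanted tags against an empty result. On a running profile the comparison boils down to something like this sketch (jq and the repoTags field are assumptions about the host tooling and this minikube build's JSON schema):

	# Flatten the reported tags for eyeball comparison with the wanted list.
	out/minikube-darwin-arm64 -p default-k8s-diff-port-171000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort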

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-171000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-171000 --alsologtostderr -v=1: exit status 83 (39.941459ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-171000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-171000"

-- /stdout --
** stderr ** 
	I0802 11:18:31.866856    6371 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:31.867010    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:31.867013    6371 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:31.867015    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:31.867134    6371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:31.867347    6371 out.go:298] Setting JSON to false
	I0802 11:18:31.867352    6371 mustload.go:65] Loading cluster: default-k8s-diff-port-171000
	I0802 11:18:31.867528    6371 config.go:182] Loaded profile config "default-k8s-diff-port-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 11:18:31.872062    6371 out.go:177] * The control-plane node default-k8s-diff-port-171000 host is not running: state=Stopped
	I0802 11:18:31.875775    6371 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-171000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-171000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (27.843958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (27.94825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
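
The pause command exits 83 after printing the not-running advice above, so this is minikube refusing to pause a stopped host rather than a crash. A guard built on the same status probe the post-mortem runs would skip the failing pause entirely; a sketch using only commands that already appear in this report:

	# Pause only when the host reports Running.
	if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-171000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-171000
	fi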

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-671000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (30.1785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-671000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-671000 --alsologtostderr -v=1: exit status 83 (41.2645ms)

-- stdout --
	* The control-plane node newest-cni-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-671000"

-- /stdout --
** stderr ** 
	I0802 11:18:35.490396    6395 out.go:291] Setting OutFile to fd 1 ...
	I0802 11:18:35.490533    6395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:35.490536    6395 out.go:304] Setting ErrFile to fd 2...
	I0802 11:18:35.490538    6395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 11:18:35.490688    6395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 11:18:35.490918    6395 out.go:298] Setting JSON to false
	I0802 11:18:35.490923    6395 mustload.go:65] Loading cluster: newest-cni-671000
	I0802 11:18:35.491137    6395 config.go:182] Loaded profile config "newest-cni-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0802 11:18:35.495307    6395 out.go:177] * The control-plane node newest-cni-671000 host is not running: state=Stopped
	I0802 11:18:35.499219    6395 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-671000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-671000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (28.878791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (29.352958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
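
Both pause failures in this run follow the same shape: the command exits with status 83 and prints advice instead of pausing, because the control-plane host is stopped. A sketch of reproducing that assertion outside the suite (command path and profile copied from the log; the checking logic itself is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "pause",
		"-p", "newest-cni-671000", "--alsologtostderr", "-v=1").CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The log above shows exit status 83 plus "host is not running" advice.
		fmt.Printf("pause failed: exit status %d\n%s", exitErr.ExitCode(), out)
	}
}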

Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 7.79
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 21.77
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 257.86
38 TestAddons/serial/Volcano 38.03
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.01
43 TestAddons/parallel/Ingress 18.33
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.24
48 TestAddons/parallel/CSI 52.47
49 TestAddons/parallel/Headlamp 16.53
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 40.8
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.37
62 TestHyperKitDriverInstallOrUpdate 11.08
65 TestErrorSpam/setup 36.24
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.64
69 TestErrorSpam/unpause 0.58
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 48.92
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.95
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.61
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.75
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
90 TestFunctional/serial/ExtraConfig 36.26
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.62
94 TestFunctional/serial/InvalidService 4.09
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 6.53
98 TestFunctional/parallel/DryRun 0.24
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.48
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.45
111 TestFunctional/parallel/FileSync 0.07
112 TestFunctional/parallel/CertSync 0.4
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.2
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.61
128 TestFunctional/parallel/ImageCommands/Setup 1.77
129 TestFunctional/parallel/DockerEnv/bash 0.28
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.99
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
146 TestFunctional/parallel/ServiceCmd/List 0.09
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 7.39
161 TestFunctional/parallel/MountCmd/specific-port 0.74
162 TestFunctional/parallel/MountCmd/VerifyCleanup 0.88
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 193.78
170 TestMultiControlPlane/serial/DeployApp 5.33
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 55.2
173 TestMultiControlPlane/serial/NodeLabels 0.15
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
175 TestMultiControlPlane/serial/CopyFile 4.52
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.55
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 2.89
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.89
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.46
286 TestNoKubernetes/serial/Stop 3.09
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
297 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
303 TestStartStop/group/old-k8s-version/serial/Stop 3.36
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
316 TestStartStop/group/no-preload/serial/Stop 3.63
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
321 TestStartStop/group/embed-certs/serial/Stop 1.98
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.98
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
343 TestStartStop/group/newest-cni/serial/Stop 3.32
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-200000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-200000: exit status 85 (89.402625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-200000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |          |
	|         | -p download-only-200000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 10:25:28
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 10:25:28.708663    1749 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:25:28.708818    1749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:28.708821    1749 out.go:304] Setting ErrFile to fd 2...
	I0802 10:25:28.708824    1749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:28.708980    1749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	W0802 10:25:28.709076    1749 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19355-1243/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19355-1243/.minikube/config/config.json: no such file or directory
	I0802 10:25:28.710348    1749 out.go:298] Setting JSON to true
	I0802 10:25:28.727547    1749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1491,"bootTime":1722618037,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:25:28.727662    1749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:25:28.733227    1749 out.go:97] [download-only-200000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:25:28.733391    1749 notify.go:220] Checking for updates...
	W0802 10:25:28.733409    1749 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball: no such file or directory
	I0802 10:25:28.737262    1749 out.go:169] MINIKUBE_LOCATION=19355
	I0802 10:25:28.740292    1749 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:25:28.749227    1749 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:25:28.757293    1749 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:25:28.761254    1749 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	W0802 10:25:28.767301    1749 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 10:25:28.767623    1749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:25:28.772320    1749 out.go:97] Using the qemu2 driver based on user configuration
	I0802 10:25:28.772340    1749 start.go:297] selected driver: qemu2
	I0802 10:25:28.772344    1749 start.go:901] validating driver "qemu2" against <nil>
	I0802 10:25:28.772427    1749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 10:25:28.775218    1749 out.go:169] Automatically selected the socket_vmnet network
	I0802 10:25:28.781111    1749 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0802 10:25:28.781241    1749 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 10:25:28.781308    1749 cni.go:84] Creating CNI manager for ""
	I0802 10:25:28.781325    1749 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 10:25:28.781384    1749 start.go:340] cluster config:
	{Name:download-only-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:25:28.786889    1749 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:25:28.790359    1749 out.go:97] Downloading VM boot image ...
	I0802 10:25:28.790378    1749 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0802 10:25:36.013335    1749 out.go:97] Starting "download-only-200000" primary control-plane node in "download-only-200000" cluster
	I0802 10:25:36.013353    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:36.068787    1749 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 10:25:36.068794    1749 cache.go:56] Caching tarball of preloaded images
	I0802 10:25:36.068926    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:36.073024    1749 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0802 10:25:36.073031    1749 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:36.155818    1749 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0802 10:25:42.809293    1749 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:42.809446    1749 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:43.520946    1749 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0802 10:25:43.521153    1749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-200000/config.json ...
	I0802 10:25:43.521172    1749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-200000/config.json: {Name:mk700a421512df1c0b5a01439a4728ae848a7259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 10:25:43.521424    1749 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 10:25:43.521626    1749 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0802 10:25:43.919608    1749 out.go:169] 
	W0802 10:25:43.923637    1749 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80 0x104449a80] Decompressors:map[bz2:0x14000511c90 gz:0x14000511c98 tar:0x14000511c00 tar.bz2:0x14000511c20 tar.gz:0x14000511c50 tar.xz:0x14000511c60 tar.zst:0x14000511c80 tbz2:0x14000511c20 tgz:0x14000511c50 txz:0x14000511c60 tzst:0x14000511c80 xz:0x14000511ca0 zip:0x14000511cb0 zst:0x14000511ca8] Getters:map[file:0x140002aea80 http:0x140004e6230 https:0x140004e63c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0802 10:25:43.923663    1749 out_reason.go:110] 
	W0802 10:25:43.932743    1749 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 10:25:43.936601    1749 out.go:169] 
	
	
	* The control-plane node download-only-200000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-200000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
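
The only failure inside this otherwise passing test is the cached-kubectl download: minikube asks dl.k8s.io for a v1.20.0 darwin/arm64 kubectl checksum and gets a 404, since no darwin/arm64 build exists for that release. The URL scheme can be probed directly; a small sketch (the URL is taken verbatim from the log):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The .sha256 companion file is what the checksum=file:... suffix fetches.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode) // 404, matching the failure above
}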

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-200000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (7.79s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-963000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-963000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (7.794103s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.79s)
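
json-events drives `minikube start -o=json`, which emits one JSON event per stdout line. A sketch of a consumer for that stream; the event field names here are illustrative assumptions, not minikube's documented schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-963000", "--force",
		"--kubernetes-version=v1.30.3", "--container-runtime=docker",
		"--driver=qemu2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev["type"], ev["data"]) // assumed field names
		}
	}
	_ = cmd.Wait()
}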

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-963000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-963000: exit status 85 (80.879667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-200000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |                     |
	|         | -p download-only-200000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| delete  | -p download-only-200000        | download-only-200000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| start   | -o=json --download-only        | download-only-963000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |                     |
	|         | -p download-only-963000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 10:25:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 10:25:44.344804    1777 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:25:44.344949    1777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:44.344952    1777 out.go:304] Setting ErrFile to fd 2...
	I0802 10:25:44.344954    1777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:44.345107    1777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:25:44.346153    1777 out.go:298] Setting JSON to true
	I0802 10:25:44.362240    1777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1507,"bootTime":1722618037,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:25:44.362316    1777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:25:44.367191    1777 out.go:97] [download-only-963000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:25:44.367286    1777 notify.go:220] Checking for updates...
	I0802 10:25:44.369999    1777 out.go:169] MINIKUBE_LOCATION=19355
	I0802 10:25:44.373118    1777 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:25:44.377096    1777 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:25:44.380123    1777 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:25:44.383091    1777 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	W0802 10:25:44.387378    1777 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 10:25:44.387597    1777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:25:44.391010    1777 out.go:97] Using the qemu2 driver based on user configuration
	I0802 10:25:44.391019    1777 start.go:297] selected driver: qemu2
	I0802 10:25:44.391022    1777 start.go:901] validating driver "qemu2" against <nil>
	I0802 10:25:44.391070    1777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 10:25:44.394033    1777 out.go:169] Automatically selected the socket_vmnet network
	I0802 10:25:44.399222    1777 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0802 10:25:44.399317    1777 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 10:25:44.399354    1777 cni.go:84] Creating CNI manager for ""
	I0802 10:25:44.399365    1777 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 10:25:44.399372    1777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 10:25:44.399406    1777 start.go:340] cluster config:
	{Name:download-only-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:25:44.402680    1777 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:25:44.406121    1777 out.go:97] Starting "download-only-963000" primary control-plane node in "download-only-963000" cluster
	I0802 10:25:44.406130    1777 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 10:25:44.466309    1777 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0802 10:25:44.466320    1777 cache.go:56] Caching tarball of preloaded images
	I0802 10:25:44.466473    1777 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 10:25:44.471590    1777 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0802 10:25:44.471598    1777 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:44.552880    1777 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-963000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-963000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (21.77s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-319000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-319000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (21.765252417s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (21.77s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-319000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-319000: exit status 85 (76.255834ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-200000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |                     |
	|         | -p download-only-200000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| delete  | -p download-only-200000           | download-only-200000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| start   | -o=json --download-only           | download-only-963000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |                     |
	|         | -p download-only-963000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| delete  | -p download-only-963000           | download-only-963000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT | 02 Aug 24 10:25 PDT |
	| start   | -o=json --download-only           | download-only-319000 | jenkins | v1.33.1 | 02 Aug 24 10:25 PDT |                     |
	|         | -p download-only-319000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 10:25:52
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 10:25:52.429426    1802 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:25:52.429544    1802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:52.429548    1802 out.go:304] Setting ErrFile to fd 2...
	I0802 10:25:52.429550    1802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:25:52.429678    1802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:25:52.430725    1802 out.go:298] Setting JSON to true
	I0802 10:25:52.446952    1802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1515,"bootTime":1722618037,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:25:52.447012    1802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:25:52.451637    1802 out.go:97] [download-only-319000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:25:52.451742    1802 notify.go:220] Checking for updates...
	I0802 10:25:52.455579    1802 out.go:169] MINIKUBE_LOCATION=19355
	I0802 10:25:52.459676    1802 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:25:52.462636    1802 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:25:52.465647    1802 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:25:52.468603    1802 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	W0802 10:25:52.474557    1802 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 10:25:52.474751    1802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:25:52.477578    1802 out.go:97] Using the qemu2 driver based on user configuration
	I0802 10:25:52.477588    1802 start.go:297] selected driver: qemu2
	I0802 10:25:52.477590    1802 start.go:901] validating driver "qemu2" against <nil>
	I0802 10:25:52.477635    1802 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 10:25:52.480635    1802 out.go:169] Automatically selected the socket_vmnet network
	I0802 10:25:52.485945    1802 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0802 10:25:52.486045    1802 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 10:25:52.486076    1802 cni.go:84] Creating CNI manager for ""
	I0802 10:25:52.486083    1802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 10:25:52.486090    1802 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 10:25:52.486133    1802 start.go:340] cluster config:
	{Name:download-only-319000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-319000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:25:52.489634    1802 iso.go:125] acquiring lock: {Name:mk1b9591af50d0b7e779bc2d15a992f86ae48189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 10:25:52.492642    1802 out.go:97] Starting "download-only-319000" primary control-plane node in "download-only-319000" cluster
	I0802 10:25:52.492651    1802 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 10:25:52.549387    1802 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0802 10:25:52.549403    1802 cache.go:56] Caching tarball of preloaded images
	I0802 10:25:52.549572    1802 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 10:25:52.553604    1802 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0802 10:25:52.553611    1802 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:25:52.631319    1802 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0802 10:26:01.341046    1802 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:26:01.341173    1802 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0802 10:26:01.862814    1802 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0802 10:26:01.863008    1802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-319000/config.json ...
	I0802 10:26:01.863024    1802 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/download-only-319000/config.json: {Name:mk0c290d65be1ac1e28c9adb26e5d2a79ff20b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 10:26:01.863250    1802 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 10:26:01.863368    1802 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-1243/.minikube/cache/darwin/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-319000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-319000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-319000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-023000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-023000
--- PASS: TestBinaryMirror (0.28s)
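
TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP endpoint (http://127.0.0.1:49325 above) so binaries are fetched from the mirror instead of dl.k8s.io. A stand-in mirror is just a static file server; a minimal sketch (the on-disk layout is an assumption, the test builds its own fixture):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Expects files laid out like ./mirror/v1.30.3/bin/darwin/arm64/kubectl,
	// mirroring the dl.k8s.io release paths (assumed layout).
	log.Fatal(http.ListenAndServe("127.0.0.1:49325",
		http.FileServer(http.Dir("./mirror"))))
}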

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-326000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-326000: exit status 85 (54.468792ms)

-- stdout --
	* Profile "addons-326000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-326000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-326000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-326000: exit status 85 (58.425208ms)

-- stdout --
	* Profile "addons-326000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-326000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
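
Both PreSetup checks assert the inverse of a failure: addon commands against a profile that does not exist must exit with status 85 and print the "Profile ... not found" advice shown above. A sketch of that assertion (command and profile copied from the log; the checking logic is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"addons", "disable", "dashboard", "-p", "addons-326000").CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 85 {
		fmt.Printf("failed as expected:\n%s", out) // profile-not-found advice
	}
}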

TestAddons/Setup (257.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-326000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-326000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (4m17.855129084s)
--- PASS: TestAddons/Setup (257.86s)

TestAddons/serial/Volcano (38.03s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.867083ms
addons_test.go:913: volcano-controller stabilized in 7.895875ms
addons_test.go:905: volcano-admission stabilized in 7.930417ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-8fwxd" [724dcf09-daea-45dc-83ce-683ca6652101] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0040785s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-wnm9k" [0fa0aede-6628-402c-89e0-6bd3db365d6a] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003947459s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-6jnpv" [adaee933-ba19-4f78-8a54-ec8addf10081] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004204375s
addons_test.go:932: (dbg) Run:  kubectl --context addons-326000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-326000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-326000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d276780b-bfdf-4544-88bb-c87bd5558e60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d276780b-bfdf-4544-88bb-c87bd5558e60] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004073s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable volcano --alsologtostderr -v=1: (9.804189166s)
--- PASS: TestAddons/serial/Volcano (38.03s)
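Note: testdata/vcjob.yaml itself is not reproduced in this log. A minimal sketch of an equivalent Volcano Job, with illustrative values chosen to match what the test waits on (job test-job, task nginx, namespace my-volcano; Volcano's controller adds the volcano.sh/job-name label):

$ kubectl --context addons-326000 create ns my-volcano
$ kubectl --context addons-326000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano      # hand the pods to the Volcano scheduler, not kube-scheduler
  minAvailable: 1
  tasks:
    - name: nginx             # yields pods named test-job-nginx-0, as seen above
      replicas: 1
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: nginx
              image: nginx
EOF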

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-326000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-326000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.153708ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-zbgsv" [3a4584db-35c1-4193-99a5-b7f72f45a7f3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002445666s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mrw78" [383d7524-7b1f-4db3-9b45-85d9f92b6b83] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00383525s
addons_test.go:342: (dbg) Run:  kubectl --context addons-326000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-326000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-326000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.74575625s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 ip
2024/08/02 10:31:41 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.01s)
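Note: the test's two probes can be reproduced by hand. The first checks in-cluster DNS and connectivity to the registry Service; the second hits the node-level registry-proxy on port 5000, which is the endpoint the DEBUG GET line above exercises (/v2/_catalog is the standard registry API path; the probe only needs a 200):

$ kubectl --context addons-326000 run registry-test --rm --restart=Never -it \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
$ curl -s "http://$(out/minikube-darwin-arm64 -p addons-326000 ip):5000/v2/_catalog"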

TestAddons/parallel/Ingress (18.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-326000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-326000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-326000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f34e29cc-8d0f-4bb9-b149-ff45dd5024c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f34e29cc-8d0f-4bb9-b149-ff45dd5024c6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004001584s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-326000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable ingress --alsologtostderr -v=1: (7.202319917s)
--- PASS: TestAddons/parallel/Ingress (18.33s)
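Note: testdata/nginx-ingress-v1.yaml is not shown in the log. A minimal Ingress that would satisfy the in-VM curl above (host nginx.example.com routed to a Service named nginx on port 80; all names here are illustrative, not copied from the testdata):

$ kubectl --context addons-326000 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx     # served by the ingress-nginx controller the addon installs
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF
$ out/minikube-darwin-arm64 -p addons-326000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"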

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tq5rg" [3ef377cd-5e41-4505-a569-6c0ca30e7f6b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004172375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-326000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-326000: (5.213020834s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.24s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.418708ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-grjzx" [77faf3ba-9388-48c8-9337-175243abdfc2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003566417s
addons_test.go:417: (dbg) Run:  kubectl --context addons-326000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.24s)
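Note: kubectl top only succeeds once metrics-server is serving the aggregated resource-metrics API. A quick way to confirm the API itself, independent of any pod (a standard API path, not specific to this suite):

$ kubectl --context addons-326000 get --raw /apis/metrics.k8s.io/v1beta1/nodes
$ kubectl --context addons-326000 top pods -n kube-system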

TestAddons/parallel/CSI (52.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.613292ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-326000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-326000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [105d9687-c56f-421e-a8a1-d50c424b08a4] Pending
helpers_test.go:344: "task-pv-pod" [105d9687-c56f-421e-a8a1-d50c424b08a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [105d9687-c56f-421e-a8a1-d50c424b08a4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003658042s
addons_test.go:590: (dbg) Run:  kubectl --context addons-326000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-326000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-326000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-326000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-326000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-326000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-326000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5f51eb4b-4320-4115-9a99-f3934c2925eb] Pending
helpers_test.go:344: "task-pv-pod-restore" [5f51eb4b-4320-4115-9a99-f3934c2925eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5f51eb4b-4320-4115-9a99-f3934c2925eb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003759834s
addons_test.go:632: (dbg) Run:  kubectl --context addons-326000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-326000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-326000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.088578792s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.47s)
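Note: the snapshot-and-restore flow above is driven by manifests under testdata/csi-hostpath-driver/ that the log does not include. A sketch of the two key objects; the class names csi-hostpath-sc and csi-hostpath-snapclass are assumptions about what the addon installs, not copied from the testdata:

$ kubectl --context addons-326000 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc                 # snapshot the bound claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed addon default
  dataSource:                                       # provision this claim from the snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF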

TestAddons/parallel/Headlamp (16.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-326000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5h4mz" [abcc56e5-5e43-4b04-9341-fc3dbfc36887] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5h4mz" [abcc56e5-5e43-4b04-9341-fc3dbfc36887] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003982458s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable headlamp --alsologtostderr -v=1: (5.192871666s)
--- PASS: TestAddons/parallel/Headlamp (16.53s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-kmqk9" [1571e43f-4ed7-4786-a508-b3c853bd3152] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00368725s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-326000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (40.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-326000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-326000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3321bd2c-e151-4b9e-90d5-eb9076d26b29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3321bd2c-e151-4b9e-90d5-eb9076d26b29] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3321bd2c-e151-4b9e-90d5-eb9076d26b29] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003640833s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-326000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 ssh "cat /opt/local-path-provisioner/pvc-0ee55461-4f52-4f3e-a7c3-c12c7329b489_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-326000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-326000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.346345208s)
--- PASS: TestAddons/parallel/LocalPath (40.80s)
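Note: the long run of Pending polls above is expected behavior, not a flake: the local-path storage class uses WaitForFirstConsumer volume binding, so the claim stays Pending until the consuming pod is scheduled. An equivalent claim (size illustrative; local-path is the provisioner's default class name):

$ kubectl --context addons-326000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # binds only once a pod mounts it
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF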

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xnvgb" [4fa492af-e297-4c9b-8d93-01136234015f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004727583s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-326000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-2vvkt" [6232678e-6b55-43c0-931b-7b79ca268b99] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0038405s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-326000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-326000 addons disable yakd --alsologtostderr -v=1: (5.197983875s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-326000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-326000: (12.183674042s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-326000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-326000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-326000
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestHyperKitDriverInstallOrUpdate (11.08s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.08s)

TestErrorSpam/setup (36.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-024000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-024000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 --driver=qemu2 : (36.236891709s)
--- PASS: TestErrorSpam/setup (36.24s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop: (12.197641875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop: (26.057265208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-024000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-024000 stop: (26.029543792s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19355-1243/.minikube/files/etc/test/nested/copy/1747/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0802 10:35:32.910907    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:32.917670    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:32.929751    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:32.951825    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:32.993875    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:33.075945    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:33.238031    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:33.560142    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:34.202270    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:35.484425    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:38.046548    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:35:43.167203    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-775000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.917614333s)
--- PASS: TestFunctional/serial/StartWithProxy (48.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.95s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --alsologtostderr -v=8
E0802 10:35:53.408873    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:36:13.890577    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-775000 --alsologtostderr -v=8: (35.946784584s)
functional_test.go:659: soft start took 35.947195417s for "functional-775000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.95s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-775000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4267352948/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache add minikube-local-cache-test:functional-775000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache delete minikube-local-cache-test:functional-775000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-775000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (69.481042ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)
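Note: the sequence above is the whole reload contract in miniature. Condensed, with the expected failure step annotated (commands as run by the test):

$ out/minikube-darwin-arm64 -p functional-775000 cache add registry.k8s.io/pause:latest
$ out/minikube-darwin-arm64 -p functional-775000 ssh sudo docker rmi registry.k8s.io/pause:latest
$ out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone from the node
$ out/minikube-darwin-arm64 -p functional-775000 cache reload                                           # pushes cached images back into the node
$ out/minikube-darwin-arm64 -p functional-775000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again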

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 kubectl -- --context functional-775000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-775000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

TestFunctional/serial/ExtraConfig (36.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0802 10:36:54.850130    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-775000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.257758042s)
functional_test.go:757: restart took 36.2578655s for "functional-775000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.26s)
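Note: --extra-config takes component.flag=value pairs; here the component is apiserver. One hedged way to confirm the flag actually reached the static pod, assuming the usual kubeadm component=kube-apiserver pod label:

$ kubectl --context functional-775000 -n kube-system get pod -l component=kube-apiserver \
    -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission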

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-775000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3989165808/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-775000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-775000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-775000: exit status 115 (103.961458ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30817 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-775000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
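Note: testdata/invalidsvc.yaml is not reproduced here. The shape that produces SVC_UNREACHABLE is a NodePort Service whose selector matches no pods, so a URL is assigned but the service never gets endpoints; an illustrative sketch:

$ kubectl --context functional-775000 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # matches nothing, so the service has no endpoints
  ports:
    - port: 80
EOF
$ out/minikube-darwin-arm64 service invalid-svc -p functional-775000   # exits 115 (SVC_UNREACHABLE)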

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 config get cpus: exit status 14 (29.454208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 config get cpus: exit status 14 (28.85ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
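Note: exit status 14 on config get for an unset key is the expected contract, not a failure. The round-trip the test performs, condensed:

$ out/minikube-darwin-arm64 -p functional-775000 config set cpus 2
$ out/minikube-darwin-arm64 -p functional-775000 config get cpus     # prints 2
$ out/minikube-darwin-arm64 -p functional-775000 config unset cpus
$ out/minikube-darwin-arm64 -p functional-775000 config get cpus     # exit 14: key not found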

TestFunctional/parallel/DashboardCmd (6.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-775000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-775000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2850: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.53s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-775000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.187459ms)

-- stdout --
	* [functional-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0802 10:38:01.711375    2833 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:38:01.711553    2833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.711556    2833 out.go:304] Setting ErrFile to fd 2...
	I0802 10:38:01.711558    2833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.711706    2833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:38:01.712819    2833 out.go:298] Setting JSON to false
	I0802 10:38:01.730428    2833 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2244,"bootTime":1722618037,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:38:01.730508    2833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:38:01.735370    2833 out.go:177] * [functional-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0802 10:38:01.744276    2833 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 10:38:01.744328    2833 notify.go:220] Checking for updates...
	I0802 10:38:01.751254    2833 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:38:01.755255    2833 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:38:01.758269    2833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:38:01.761248    2833 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 10:38:01.764234    2833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 10:38:01.767483    2833 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:38:01.767760    2833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:38:01.771184    2833 out.go:177] * Using the qemu2 driver based on existing profile
	I0802 10:38:01.778214    2833 start.go:297] selected driver: qemu2
	I0802 10:38:01.778223    2833 start.go:901] validating driver "qemu2" against &{Name:functional-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:38:01.778273    2833 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 10:38:01.785247    2833 out.go:177] 
	W0802 10:38:01.789044    2833 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0802 10:38:01.793227    2833 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
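For reference, the rejected invocation above can be replayed by hand. A minimal sketch, using the binary path and profile name from this run; the echo of $? is an illustrative addition (the log itself records exit status 23):

	# Replay the dry-run memory validation exercised at functional_test.go:970.
	# 250MB is below minikube's usable minimum of 1800MB, so the command
	# should fail fast with RSRC_INSUFFICIENT_REQ_MEMORY.
	out/minikube-darwin-arm64 start -p functional-775000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
	echo "exit status: $?"   # the log above records exit status 23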

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-775000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-775000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.989291ms)

-- stdout --
	* [functional-775000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0802 10:38:01.943168    2844 out.go:291] Setting OutFile to fd 1 ...
	I0802 10:38:01.943275    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.943279    2844 out.go:304] Setting ErrFile to fd 2...
	I0802 10:38:01.943281    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 10:38:01.943405    2844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
	I0802 10:38:01.944678    2844 out.go:298] Setting JSON to false
	I0802 10:38:01.961973    2844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2244,"bootTime":1722618037,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0802 10:38:01.962063    2844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0802 10:38:01.967268    2844 out.go:177] * [functional-775000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0802 10:38:01.975377    2844 notify.go:220] Checking for updates...
	I0802 10:38:01.978201    2844 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 10:38:01.982278    2844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	I0802 10:38:01.983550    2844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0802 10:38:01.986200    2844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 10:38:01.989257    2844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	I0802 10:38:01.992255    2844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 10:38:01.995491    2844 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 10:38:01.995752    2844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 10:38:02.000240    2844 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0802 10:38:02.007243    2844 start.go:297] selected driver: qemu2
	I0802 10:38:02.007250    2844 start.go:901] validating driver "qemu2" against &{Name:functional-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-775000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 10:38:02.007316    2844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 10:38:02.013212    2844 out.go:177] 
	W0802 10:38:02.017234    2844 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0802 10:38:02.023163    2844 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [06ad72fa-c668-4c04-84db-20d4e2270648] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004013375s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-775000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-775000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-775000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-775000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [37011259-4b90-4614-979d-15ec443a3943] Pending
helpers_test.go:344: "sp-pod" [37011259-4b90-4614-979d-15ec443a3943] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [37011259-4b90-4614-979d-15ec443a3943] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004106459s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-775000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-775000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-775000 delete -f testdata/storage-provisioner/pod.yaml: (1.0679725s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-775000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee4e9f4e-78b0-426c-a944-5e22c6d1487e] Pending
helpers_test.go:344: "sp-pod" [ee4e9f4e-78b0-426c-a944-5e22c6d1487e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee4e9f4e-78b0-426c-a944-5e22c6d1487e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003967458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-775000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.48s)
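Condensed, the persistence round trip exercised above looks like the following sketch; commands and names are taken verbatim from the log (the contents of pvc.yaml and pod.yaml are not reproduced in this report):

	# Create the claim and a pod that mounts it, write a marker file,
	# then delete and recreate the pod and confirm the file survived.
	kubectl --context functional-775000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-775000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-775000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-775000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-775000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-775000 exec sp-pod -- ls /tmp/mount   # foo should still be listed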

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh -n functional-775000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cp functional-775000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd610214846/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh -n functional-775000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh -n functional-775000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1747/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /etc/test/nested/copy/1747/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1747.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /etc/ssl/certs/1747.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1747.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /usr/share/ca-certificates/1747.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /etc/ssl/certs/17472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /usr/share/ca-certificates/17472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-775000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "sudo systemctl is-active crio": exit status 1 (72.207083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-775000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-775000
docker.io/kicbase/echo-server:functional-775000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-775000 image ls --format short --alsologtostderr:
I0802 10:38:06.592140    2872 out.go:291] Setting OutFile to fd 1 ...
I0802 10:38:06.592478    2872 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.592486    2872 out.go:304] Setting ErrFile to fd 2...
I0802 10:38:06.592489    2872 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.592626    2872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:38:06.593034    2872 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.593097    2872 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.593919    2872 ssh_runner.go:195] Run: systemctl --version
I0802 10:38:06.593926    2872 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/functional-775000/id_rsa Username:docker}
I0802 10:38:06.619500    2872 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-775000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-775000 | c37a13ca0a40c | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-775000 | c49cd8dab42fc | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| docker.io/kicbase/echo-server               | functional-775000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-775000 image ls --format table --alsologtostderr:
I0802 10:38:08.431064    2886 out.go:291] Setting OutFile to fd 1 ...
I0802 10:38:08.431201    2886 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:08.431205    2886 out.go:304] Setting ErrFile to fd 2...
I0802 10:38:08.431207    2886 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:08.431343    2886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:38:08.431792    2886 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:08.431868    2886 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:08.432680    2886 ssh_runner.go:195] Run: systemctl --version
I0802 10:38:08.432691    2886 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/functional-775000/id_rsa Username:docker}
I0802 10:38:08.457615    2886 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls --format json --alsologtostderr
2024/08/02 10:38:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-775000 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-775000"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"c37a13ca0a40c111478044
3865b41db417d28547b6bfc1013289cd175cf5f52c","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-775000"],"size":"1410000"},{"id":"c49cd8dab42fccfda28ff040e6e15ee590501df72ba958f4871f10d4bf3e152b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-775000"],"size":"30"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repo
Tags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"19300
0000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-775000 image ls --format json --alsologtostderr:
I0802 10:38:08.361614    2883 out.go:291] Setting OutFile to fd 1 ...
I0802 10:38:08.361767    2883 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:08.361771    2883 out.go:304] Setting ErrFile to fd 2...
I0802 10:38:08.361773    2883 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:08.361902    2883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:38:08.362335    2883 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:08.362395    2883 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:08.363210    2883 ssh_runner.go:195] Run: systemctl --version
I0802 10:38:08.363219    2883 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/functional-775000/id_rsa Username:docker}
I0802 10:38:08.387790    2883 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-775000 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-775000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c49cd8dab42fccfda28ff040e6e15ee590501df72ba958f4871f10d4bf3e152b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-775000
size: "30"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-775000 image ls --format yaml --alsologtostderr:
I0802 10:38:06.663477    2874 out.go:291] Setting OutFile to fd 1 ...
I0802 10:38:06.663651    2874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.663654    2874 out.go:304] Setting ErrFile to fd 2...
I0802 10:38:06.663657    2874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.663794    2874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:38:06.664207    2874 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.664267    2874 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.665063    2874 ssh_runner.go:195] Run: systemctl --version
I0802 10:38:06.665072    2874 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/functional-775000/id_rsa Username:docker}
I0802 10:38:06.700758    2874 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh pgrep buildkitd: exit status 1 (63.235125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image build -t localhost/my-image:functional-775000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-775000 image build -t localhost/my-image:functional-775000 testdata/build --alsologtostderr: (1.480216208s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-775000 image build -t localhost/my-image:functional-775000 testdata/build --alsologtostderr:
I0802 10:38:06.813102    2878 out.go:291] Setting OutFile to fd 1 ...
I0802 10:38:06.813293    2878 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.813297    2878 out.go:304] Setting ErrFile to fd 2...
I0802 10:38:06.813299    2878 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 10:38:06.813421    2878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-1243/.minikube/bin
I0802 10:38:06.813905    2878 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.815020    2878 config.go:182] Loaded profile config "functional-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 10:38:06.815868    2878 ssh_runner.go:195] Run: systemctl --version
I0802 10:38:06.815877    2878 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19355-1243/.minikube/machines/functional-775000/id_rsa Username:docker}
I0802 10:38:06.841006    2878 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2992979051.tar
I0802 10:38:06.841066    2878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0802 10:38:06.845463    2878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2992979051.tar
I0802 10:38:06.848313    2878 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2992979051.tar: stat -c "%s %y" /var/lib/minikube/build/build.2992979051.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2992979051.tar': No such file or directory
I0802 10:38:06.848333    2878 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2992979051.tar --> /var/lib/minikube/build/build.2992979051.tar (3072 bytes)
I0802 10:38:06.865594    2878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2992979051
I0802 10:38:06.869398    2878 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2992979051 -xf /var/lib/minikube/build/build.2992979051.tar
I0802 10:38:06.873681    2878 docker.go:360] Building image: /var/lib/minikube/build/build.2992979051
I0802 10:38:06.873749    2878 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-775000 /var/lib/minikube/build/build.2992979051
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:c37a13ca0a40c1114780443865b41db417d28547b6bfc1013289cd175cf5f52c done
#8 naming to localhost/my-image:functional-775000 done
#8 DONE 0.0s
I0802 10:38:08.244036    2878 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-775000 /var/lib/minikube/build/build.2992979051: (1.37030275s)
I0802 10:38:08.244109    2878 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2992979051
I0802 10:38:08.248148    2878 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2992979051.tar
I0802 10:38:08.251823    2878 build_images.go:217] Built localhost/my-image:functional-775000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2992979051.tar
I0802 10:38:08.251842    2878 build_images.go:133] succeeded building to: functional-775000
I0802 10:38:08.251847    2878 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)
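The BuildKit stages above (#1-#8) imply a three-instruction Dockerfile. The following is a reconstruction from the visible stages, not the verbatim contents of testdata/build, and the echo stands in for the 62B build context the test transfers:

	# Stages #5-#7 show: busybox base, a no-op RUN layer, one ADD.
	# <<- strips the leading tabs so the heredoc works as indented here.
	echo "test content" > content.txt   # stand-in for the testdata file
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-darwin-arm64 -p functional-775000 image build -t localhost/my-image:functional-775000 .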

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.753626209s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-775000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-775000 docker-env) && out/minikube-darwin-arm64 status -p functional-775000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-775000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
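Both assertions above rely on minikube's docker-env output; a minimal sketch of the same mechanism:

	# docker-env prints DOCKER_HOST/DOCKER_CERT_PATH/DOCKER_TLS_VERIFY exports
	# for the VM's daemon; after eval, plain docker commands target the cluster.
	eval "$(out/minikube-darwin-arm64 -p functional-775000 docker-env)"
	docker images   # lists cluster-side images such as kicbase/echo-server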

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-775000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-775000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-fvrvc" [7cecc339-b4bc-49bb-941d-1bf84f6767be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-fvrvc" [7cecc339-b4bc-49bb-941d-1bf84f6767be] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00417325s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
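Condensed, the deployment flow above is just create, expose, and wait; a sketch where the kubectl wait call is a stand-in for the test's 10m pod-readiness poll:

	# Deploy the echoserver image and expose it on a NodePort, as the test does.
	kubectl --context functional-775000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-775000 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-775000 wait --for=condition=ready pod -l app=hello-node --timeout=10m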

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image load --daemon docker.io/kicbase/echo-server:functional-775000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image load --daemon docker.io/kicbase/echo-server:functional-775000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-775000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image load --daemon docker.io/kicbase/echo-server:functional-775000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image save docker.io/kicbase/echo-server:functional-775000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image rm docker.io/kicbase/echo-server:functional-775000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-775000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 image save --daemon docker.io/kicbase/echo-server:functional-775000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-775000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2698: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.99s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-775000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ede36f9e-6c0a-493a-a996-63e851798c96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ede36f9e-6c0a-493a-a996-63e851798c96] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002818833s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service list -o json
functional_test.go:1490: Took "83.676958ms" to run "out/minikube-darwin-arm64 -p functional-775000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30714
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30714
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-775000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.27.24 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-775000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "84.25375ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.821167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "84.852333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.5955ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722620272675951000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722620272675951000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722620272675951000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001/test-1722620272675951000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.731667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.84925ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  2 17:37 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  2 17:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  2 17:37 test-1722620272675951000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh cat /mount-9p/test-1722620272675951000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-775000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a1dfdf11-be25-4873-897b-b0fb878bc087] Pending
helpers_test.go:344: "busybox-mount" [a1dfdf11-be25-4873-897b-b0fb878bc087] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a1dfdf11-be25-4873-897b-b0fb878bc087] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a1dfdf11-be25-4873-897b-b0fb878bc087] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004229375s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-775000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1540775981/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.39s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port980471019/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.57725ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port980471019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "sudo umount -f /mount-9p": exit status 1 (60.224125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-775000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port980471019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.74s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T" /mount1: exit status 1 (83.356875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-775000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-775000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-775000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1012216766/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-775000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-775000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-775000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-982000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0802 10:38:16.770263    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:40:32.901545    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
E0802 10:41:00.608301    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-982000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m13.575704667s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.78s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-982000 -- rollout status deployment/busybox: (3.819837791s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-kslh6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-n4b27 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-sg99w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-kslh6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-n4b27 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-sg99w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-kslh6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-n4b27 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-sg99w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.33s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-kslh6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-kslh6 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-n4b27 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-n4b27 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-sg99w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-982000 -- exec busybox-fc5497c4f-sg99w -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-982000 -v=7 --alsologtostderr
E0802 10:42:15.116523    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.122849    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.134963    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.157031    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.199103    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.279375    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.441489    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:15.762737    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:16.404839    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:17.686401    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
E0802 10:42:20.248458    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-982000 -v=7 --alsologtostderr: (54.968000958s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.20s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-982000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp testdata/cp-test.txt ha-982000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile209904708/001/cp-test_ha-982000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000:/home/docker/cp-test.txt ha-982000-m02:/home/docker/cp-test_ha-982000_ha-982000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test_ha-982000_ha-982000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000:/home/docker/cp-test.txt ha-982000-m03:/home/docker/cp-test_ha-982000_ha-982000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test_ha-982000_ha-982000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000:/home/docker/cp-test.txt ha-982000-m04:/home/docker/cp-test_ha-982000_ha-982000-m04.txt
E0802 10:42:25.370533    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test_ha-982000_ha-982000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp testdata/cp-test.txt ha-982000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile209904708/001/cp-test_ha-982000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m02:/home/docker/cp-test.txt ha-982000:/home/docker/cp-test_ha-982000-m02_ha-982000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test_ha-982000-m02_ha-982000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m02:/home/docker/cp-test.txt ha-982000-m03:/home/docker/cp-test_ha-982000-m02_ha-982000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test_ha-982000-m02_ha-982000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m02:/home/docker/cp-test.txt ha-982000-m04:/home/docker/cp-test_ha-982000-m02_ha-982000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test_ha-982000-m02_ha-982000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp testdata/cp-test.txt ha-982000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile209904708/001/cp-test_ha-982000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m03:/home/docker/cp-test.txt ha-982000:/home/docker/cp-test_ha-982000-m03_ha-982000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test_ha-982000-m03_ha-982000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m03:/home/docker/cp-test.txt ha-982000-m02:/home/docker/cp-test_ha-982000-m03_ha-982000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test_ha-982000-m03_ha-982000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m03:/home/docker/cp-test.txt ha-982000-m04:/home/docker/cp-test_ha-982000-m03_ha-982000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test_ha-982000-m03_ha-982000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp testdata/cp-test.txt ha-982000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile209904708/001/cp-test_ha-982000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m04:/home/docker/cp-test.txt ha-982000:/home/docker/cp-test_ha-982000-m04_ha-982000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000 "sudo cat /home/docker/cp-test_ha-982000-m04_ha-982000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m04:/home/docker/cp-test.txt ha-982000-m02:/home/docker/cp-test_ha-982000-m04_ha-982000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m02 "sudo cat /home/docker/cp-test_ha-982000-m04_ha-982000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 cp ha-982000-m04:/home/docker/cp-test.txt ha-982000-m03:/home/docker/cp-test_ha-982000-m04_ha-982000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-982000 ssh -n ha-982000-m03 "sudo cat /home/docker/cp-test_ha-982000-m04_ha-982000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0802 10:51:55.895853    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.551983167s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.55s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-566000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-566000 --output=json --user=testUser: (2.884821958s)
--- PASS: TestJSONOutput/stop/Command (2.89s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-033000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-033000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.92375ms)

-- stdout --
	{"specversion":"1.0","id":"78d5c464-d2d3-4362-a5e4-1a670724cbb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-033000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9acd17b-df63-4a98-b7c0-c88819c923e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"681de76a-35fb-4e7c-ab10-5fbd8f0edc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig"}}
	{"specversion":"1.0","id":"d1e396c8-da96-4956-ae54-ad5dbf86b855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f74e8c3f-4e7f-47f6-9dee-dc67dfdf4a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"77fe8725-8bdc-40dc-8b5a-f2b541abad2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube"}}
	{"specversion":"1.0","id":"1c03073a-47e7-4b5d-a902-4818d08b1306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d48d3f04-04a1-4f29-b049-e9a97790d323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-033000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.89s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-965000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.665958ms)
-- stdout --
	* [NoKubernetes-965000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-1243/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-1243/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-965000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-965000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.225916ms)
-- stdout --
	* The control-plane node NoKubernetes-965000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-965000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.46s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0802 11:15:18.071109    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/functional-775000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.706312s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0802 11:15:32.786160    1747 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19355-1243/.minikube/profiles/addons-326000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.748572292s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.46s)

TestNoKubernetes/serial/Stop (3.09s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-965000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-965000: (3.092885833s)
--- PASS: TestNoKubernetes/serial/Stop (3.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-965000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-965000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.540167ms)
-- stdout --
	* The control-plane node NoKubernetes-965000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-965000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-387000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-752000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-752000 --alsologtostderr -v=3: (3.358903208s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-752000 -n old-k8s-version-752000: exit status 7 (45.585334ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-752000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/no-preload/serial/Stop (3.63s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-501000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-501000 --alsologtostderr -v=3: (3.628561s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.63s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-501000 -n no-preload-501000: exit status 7 (47.850292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-501000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.98s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-797000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-797000 --alsologtostderr -v=3: (1.980612667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-797000 -n embed-certs-797000: exit status 7 (53.114084ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-797000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-171000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-171000 --alsologtostderr -v=3: (1.976817833s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-171000 -n default-k8s-diff-port-171000: exit status 7 (60.574584ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-171000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-671000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.32s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-671000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-671000 --alsologtostderr -v=3: (3.319569083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-671000 -n newest-cni-671000: exit status 7 (56.167834ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-671000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-800000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-800000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-800000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/hosts:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/resolv.conf:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-800000

>>> host: crictl pods:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: crictl containers:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> k8s: describe netcat deployment:
error: context "cilium-800000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-800000" does not exist

>>> k8s: netcat logs:
error: context "cilium-800000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-800000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-800000" does not exist

>>> k8s: coredns logs:
error: context "cilium-800000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-800000" does not exist

>>> k8s: api server logs:
error: context "cilium-800000" does not exist

>>> host: /etc/cni:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: ip a s:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: ip r s:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: iptables-save:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: iptables table nat:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-800000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-800000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-800000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-800000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-800000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-800000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-800000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-800000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-800000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-800000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-800000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: kubelet daemon config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> k8s: kubelet logs:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-800000

>>> host: docker daemon status:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: docker daemon config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: docker system info:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: cri-docker daemon status:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: cri-docker daemon config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: cri-dockerd version:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: containerd daemon status:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: containerd daemon config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: containerd config dump:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: crio daemon status:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: crio daemon config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: /etc/crio:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

>>> host: crio config:
* Profile "cilium-800000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800000"

----------------------- debugLogs end: cilium-800000 [took: 2.294043416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-800000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-800000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-107000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-107000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
