Test Report: QEMU_macOS 19312

5c64880be4606435f09036ce2ec4c937eccc350b:2024-07-28:35539

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.87
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.46
55 TestCertOptions 12.26
56 TestCertExpiration 197.63
57 TestDockerFlags 12.45
58 TestForceSystemdFlag 11.59
59 TestForceSystemdEnv 10.17
104 TestFunctional/parallel/ServiceCmdConnect 36.55
176 TestMultiControlPlane/serial/StopSecondaryNode 312.32
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.12
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.23
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.54
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 93.61
186 TestImageBuild/serial/Setup 10.35
189 TestJSONOutput/start/Command 9.78
195 TestJSONOutput/pause/Command 0.05
201 TestJSONOutput/unpause/Command 0.05
218 TestMinikubeProfile 10.11
221 TestMountStart/serial/StartWithMountFirst 10.01
224 TestMultiNode/serial/FreshStart2Nodes 9.92
225 TestMultiNode/serial/DeployApp2Nodes 112.72
226 TestMultiNode/serial/PingHostFrom2Pods 0.08
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.08
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.14
232 TestMultiNode/serial/StartAfterStop 42.05
233 TestMultiNode/serial/RestartKeepsNodes 8.94
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 3.51
236 TestMultiNode/serial/RestartMultiNode 5.26
237 TestMultiNode/serial/ValidateNameConflict 20.51
241 TestPreload 10.16
243 TestScheduledStopUnix 9.91
244 TestSkaffold 12.96
247 TestRunningBinaryUpgrade 589.84
249 TestKubernetesUpgrade 18.01
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.72
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.71
265 TestStoppedBinaryUpgrade/Upgrade 562.81
267 TestPause/serial/Start 9.85
277 TestNoKubernetes/serial/StartWithK8s 9.99
278 TestNoKubernetes/serial/StartWithStopK8s 5.3
279 TestNoKubernetes/serial/Start 5.31
283 TestNoKubernetes/serial/StartNoArgs 5.34
285 TestNetworkPlugins/group/auto/Start 9.76
286 TestNetworkPlugins/group/kindnet/Start 9.7
287 TestNetworkPlugins/group/calico/Start 10.07
288 TestNetworkPlugins/group/custom-flannel/Start 9.84
289 TestNetworkPlugins/group/false/Start 9.96
290 TestNetworkPlugins/group/enable-default-cni/Start 9.83
291 TestNetworkPlugins/group/flannel/Start 9.87
292 TestNetworkPlugins/group/bridge/Start 9.78
293 TestNetworkPlugins/group/kubenet/Start 9.73
296 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.08
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.29
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 9.83
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/embed-certs/serial/FirstStart 10.02
314 TestStartStop/group/no-preload/serial/SecondStart 5.84
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.07
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
318 TestStartStop/group/no-preload/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.56
321 TestStartStop/group/embed-certs/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
325 TestStartStop/group/embed-certs/serial/SecondStart 5.81
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
331 TestStartStop/group/embed-certs/serial/Pause 0.11
334 TestStartStop/group/newest-cni/serial/FirstStart 9.94
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.89
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.26
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (14.87s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.871297917s)

-- stdout --
	{"specversion":"1.0","id":"e764b075-75a7-464c-96ce-32f934577379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d57fde09-5a39-4e7a-a40b-71451589dc19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"3371ad3a-ce49-43f8-a091-39b5054632cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig"}}
	{"specversion":"1.0","id":"1f75f182-af92-4455-808f-b332578488bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"fb38d142-b6f6-4a2c-ae65-9365d88611c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"15d21d80-23d4-4a9d-8955-3a05a0607944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube"}}
	{"specversion":"1.0","id":"3838f882-f38b-4dff-87b0-df8e195ee19f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"598462d6-809e-495b-90c0-10c770316d80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7db49062-713f-453e-85f9-05eb0c890321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3e12c9a6-e33a-4954-bf12-d8a9469b4a49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1712c475-b4b6-4012-b12a-119e0c6b1c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-504000\" primary control-plane node in \"download-only-504000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6ae97ca-4ada-45b1-8653-9bc8a60c84fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e614b18-6cf4-4e72-9a28-88cb3f178523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80] Decompressors:map[bz2:0x140004ba6b0 gz:0x140004ba6b8 tar:0x140004ba600 tar.bz2:0x140004ba620 tar.gz:0x140004ba630 tar.xz:0x140004ba650 tar.zst:0x140004ba680 tbz2:0x140004ba620 tgz:0x14
0004ba630 txz:0x140004ba650 tzst:0x140004ba680 xz:0x140004ba6c0 zip:0x140004ba6d0 zst:0x140004ba6c8] Getters:map[file:0x1400078ab70 http:0x14000178eb0 https:0x14000178f50] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"732e88b6-7c9f-411f-b8f9-bf29e0bc00ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0728 17:45:49.509310    1730 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:45:49.509449    1730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:49.509452    1730 out.go:304] Setting ErrFile to fd 2...
	I0728 17:45:49.509454    1730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:49.509581    1730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	W0728 17:45:49.509662    1730 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1229/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1229/.minikube/config/config.json: no such file or directory
	I0728 17:45:49.510980    1730 out.go:298] Setting JSON to true
	I0728 17:45:49.530267    1730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":920,"bootTime":1722213029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:45:49.530328    1730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:45:49.536111    1730 out.go:97] [download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:45:49.536249    1730 notify.go:220] Checking for updates...
	W0728 17:45:49.536258    1730 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 17:45:49.540046    1730 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:45:49.543047    1730 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:45:49.547052    1730 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:45:49.550039    1730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:45:49.553033    1730 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	W0728 17:45:49.559006    1730 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:45:49.559203    1730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:45:49.565180    1730 out.go:97] Using the qemu2 driver based on user configuration
	I0728 17:45:49.565195    1730 start.go:297] selected driver: qemu2
	I0728 17:45:49.565208    1730 start.go:901] validating driver "qemu2" against <nil>
	I0728 17:45:49.565260    1730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:45:49.569072    1730 out.go:169] Automatically selected the socket_vmnet network
	I0728 17:45:49.574767    1730 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0728 17:45:49.574855    1730 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:45:49.574904    1730 cni.go:84] Creating CNI manager for ""
	I0728 17:45:49.574922    1730 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 17:45:49.574975    1730 start.go:340] cluster config:
	{Name:download-only-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:45:49.580610    1730 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:45:49.585383    1730 out.go:97] Downloading VM boot image ...
	I0728 17:45:49.585409    1730 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0728 17:45:55.808779    1730 out.go:97] Starting "download-only-504000" primary control-plane node in "download-only-504000" cluster
	I0728 17:45:55.808800    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:45:55.868981    1730 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 17:45:55.868998    1730 cache.go:56] Caching tarball of preloaded images
	I0728 17:45:55.869173    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:45:55.873742    1730 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0728 17:45:55.873750    1730 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:45:55.952849    1730 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 17:46:02.926828    1730 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:02.927014    1730 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:03.627511    1730 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 17:46:03.627715    1730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-504000/config.json ...
	I0728 17:46:03.627732    1730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-504000/config.json: {Name:mkc1eb2c526791a45f2480b9b9e481cfc6c3a312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:03.628017    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:46:03.628214    1730 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0728 17:46:04.312164    1730 out.go:169] 
	W0728 17:46:04.319238    1730 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80] Decompressors:map[bz2:0x140004ba6b0 gz:0x140004ba6b8 tar:0x140004ba600 tar.bz2:0x140004ba620 tar.gz:0x140004ba630 tar.xz:0x140004ba650 tar.zst:0x140004ba680 tbz2:0x140004ba620 tgz:0x140004ba630 txz:0x140004ba650 tzst:0x140004ba680 xz:0x140004ba6c0 zip:0x140004ba6d0 zst:0x140004ba6c8] Getters:map[file:0x1400078ab70 http:0x14000178eb0 https:0x14000178f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0728 17:46:04.319272    1730 out_reason.go:110] 
	W0728 17:46:04.324690    1730 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:46:04.327620    1730 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-504000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.87s)
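
The -o=json stream above is one CloudEvent per line, so the failing step can be pulled out mechanically rather than by scanning the dump. A minimal triage sketch, assuming jq is available on the agent (command and profile name are taken verbatim from the run above):

	# Re-run the failing start and print only minikube error events
	out/minikube-darwin-arm64 start -o=json --download-only -p download-only-504000 \
	    --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'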

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
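
Both v1.20.0 subtests trace back to the same 404 on the checksum file at dl.k8s.io: the v1.20.0 release predates darwin/arm64 kubectl binaries, so there is nothing to download, and the later stat check necessarily fails too. A quick check outside the harness (assumes curl; dl.k8s.io redirects to a CDN, so follow redirects to see the final status):

	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | grep '^HTTP'
	# the final status line should be the 404 the getter reported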

TestOffline (10.46s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-488000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-488000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.311268416s)

-- stdout --
	* [offline-docker-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-488000" primary control-plane node in "offline-docker-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:32:08.859616    4488 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:08.859740    4488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:08.859743    4488 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:08.859746    4488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:08.859864    4488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:32:08.860974    4488 out.go:298] Setting JSON to false
	I0728 18:32:08.878506    4488 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3699,"bootTime":1722213029,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:32:08.878584    4488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:32:08.883446    4488 out.go:177] * [offline-docker-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:32:08.891492    4488 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:32:08.891507    4488 notify.go:220] Checking for updates...
	I0728 18:32:08.897381    4488 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:32:08.900471    4488 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:32:08.903357    4488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:32:08.906422    4488 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:32:08.909492    4488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:32:08.911058    4488 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:08.911120    4488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:32:08.915419    4488 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:32:08.922275    4488 start.go:297] selected driver: qemu2
	I0728 18:32:08.922284    4488 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:32:08.922291    4488 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:32:08.924088    4488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:32:08.927408    4488 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:32:08.930490    4488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:32:08.930515    4488 cni.go:84] Creating CNI manager for ""
	I0728 18:32:08.930521    4488 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:32:08.930525    4488 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:32:08.930565    4488 start.go:340] cluster config:
	{Name:offline-docker-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:08.934264    4488 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:08.941420    4488 out.go:177] * Starting "offline-docker-488000" primary control-plane node in "offline-docker-488000" cluster
	I0728 18:32:08.945460    4488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:32:08.945483    4488 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:32:08.945490    4488 cache.go:56] Caching tarball of preloaded images
	I0728 18:32:08.945555    4488 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:32:08.945561    4488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:32:08.945617    4488 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/offline-docker-488000/config.json ...
	I0728 18:32:08.945631    4488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/offline-docker-488000/config.json: {Name:mke970d0b45bbbc9a5c9abf80e90de9cebfa746f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:32:08.945916    4488 start.go:360] acquireMachinesLock for offline-docker-488000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:08.945949    4488 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "offline-docker-488000"
	I0728 18:32:08.945960    4488 start.go:93] Provisioning new machine with config: &{Name:offline-docker-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:08.945986    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:08.950410    4488 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:08.966231    4488 start.go:159] libmachine.API.Create for "offline-docker-488000" (driver="qemu2")
	I0728 18:32:08.966271    4488 client.go:168] LocalClient.Create starting
	I0728 18:32:08.966347    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:08.966380    4488 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:08.966397    4488 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:08.966443    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:08.966469    4488 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:08.966477    4488 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:08.966919    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:09.115173    4488 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:09.295607    4488 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:09.295618    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:09.300083    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:09.314614    4488 main.go:141] libmachine: STDOUT: 
	I0728 18:32:09.314644    4488 main.go:141] libmachine: STDERR: 
	I0728 18:32:09.314732    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2 +20000M
	I0728 18:32:09.325514    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:09.325543    4488 main.go:141] libmachine: STDERR: 
	I0728 18:32:09.325559    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:09.325565    4488 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:09.325577    4488 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:09.325619    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cf:f1:bb:db:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:09.327858    4488 main.go:141] libmachine: STDOUT: 
	I0728 18:32:09.327880    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:09.327906    4488 client.go:171] duration metric: took 361.6285ms to LocalClient.Create
	I0728 18:32:11.330023    4488 start.go:128] duration metric: took 2.384022416s to createHost
	I0728 18:32:11.330057    4488 start.go:83] releasing machines lock for "offline-docker-488000", held for 2.3841015s
	W0728 18:32:11.330093    4488 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:11.344344    4488 out.go:177] * Deleting "offline-docker-488000" in qemu2 ...
	W0728 18:32:11.356442    4488 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:11.356457    4488 start.go:729] Will try again in 5 seconds ...
	I0728 18:32:16.358587    4488 start.go:360] acquireMachinesLock for offline-docker-488000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:16.358737    4488 start.go:364] duration metric: took 111.625µs to acquireMachinesLock for "offline-docker-488000"
	I0728 18:32:16.358782    4488 start.go:93] Provisioning new machine with config: &{Name:offline-docker-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:16.358836    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:16.397312    4488 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:16.420288    4488 start.go:159] libmachine.API.Create for "offline-docker-488000" (driver="qemu2")
	I0728 18:32:16.420328    4488 client.go:168] LocalClient.Create starting
	I0728 18:32:16.420402    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:16.420442    4488 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:16.420456    4488 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:16.420494    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:16.420522    4488 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:16.420531    4488 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:16.420830    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:16.779067    4488 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:17.081811    4488 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:17.081822    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:17.082053    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:17.091539    4488 main.go:141] libmachine: STDOUT: 
	I0728 18:32:17.091559    4488 main.go:141] libmachine: STDERR: 
	I0728 18:32:17.091609    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2 +20000M
	I0728 18:32:17.099695    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:17.099710    4488 main.go:141] libmachine: STDERR: 
	I0728 18:32:17.099719    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:17.099724    4488 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:17.099750    4488 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:17.099783    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:dd:9a:73:a1:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/offline-docker-488000/disk.qcow2
	I0728 18:32:17.101312    4488 main.go:141] libmachine: STDOUT: 
	I0728 18:32:17.101326    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:17.101340    4488 client.go:171] duration metric: took 681.009042ms to LocalClient.Create
	I0728 18:32:19.103531    4488 start.go:128] duration metric: took 2.744666791s to createHost
	I0728 18:32:19.103590    4488 start.go:83] releasing machines lock for "offline-docker-488000", held for 2.744826417s
	W0728 18:32:19.104016    4488 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:19.112564    4488 out.go:177] 
	W0728 18:32:19.116710    4488 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:32:19.116738    4488 out.go:239] * 
	* 
	W0728 18:32:19.119358    4488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:32:19.128394    4488 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-488000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-28 18:32:19.144433 -0700 PDT m=+2789.701062793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-488000 -n offline-docker-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-488000 -n offline-docker-488000: exit status 7 (65.539541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-488000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-488000
--- FAIL: TestOffline (10.46s)
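
This failure shares one signature with the other qemu2 start failures in this report (TestCertOptions and TestCertExpiration below show the identical error): Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet socket when QEMU was launched. A minimal health-check sketch, assuming socket_vmnet was installed via Homebrew as the paths in the log suggest:

	# Is the daemon running, and does the socket exist? (paths from the log above)
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l /var/run/socket_vmnet
	# If the daemon is down, restarting the Homebrew service usually clears this:
	sudo brew services restart socket_vmnet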

TestCertOptions (12.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-660000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-660000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.991599916s)

-- stdout --
	* [cert-options-660000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-660000" primary control-plane node in "cert-options-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-660000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-660000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-660000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.876834ms)

-- stdout --
	* The control-plane node cert-options-660000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-660000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-660000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-660000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-660000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-660000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (38.361875ms)

-- stdout --
	* The control-plane node cert-options-660000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-660000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-660000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-660000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-660000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-28 18:32:54.060605 -0700 PDT m=+2824.617249751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-660000 -n cert-options-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-660000 -n cert-options-660000: exit status 7 (29.753333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-660000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-660000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-660000
--- FAIL: TestCertOptions (12.26s)
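
TestCertOptions never reached its certificate assertions: both VM creation attempts died on the same underlying error, the qemu2 driver could not reach the socket_vmnet unix socket (Failed to connect to "/var/run/socket_vmnet": Connection refused), which recurs throughout this run. A minimal triage sketch for the CI host follows; the socket path and client path appear in the logs above, while the daemon binary path and gateway address are assumptions based on a default socket_vmnet install and may differ on this machine:

	# Confirm the unix socket exists and something is listening on it
	ls -l /var/run/socket_vmnet
	# Look for a running socket_vmnet daemon (the launchd label varies by install)
	sudo launchctl list | grep -i socket_vmnet
	# If no daemon is running, start one in the foreground to watch for errors
	# (binary path and gateway address are assumed, not taken from this run)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet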

TestCertExpiration (197.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.314127458s)

-- stdout --
	* [cert-expiration-273000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-273000" primary control-plane node in "cert-expiration-273000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-273000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-273000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.197145083s)

-- stdout --
	* [cert-expiration-273000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-273000" primary control-plane node in "cert-expiration-273000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-273000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-273000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-273000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-273000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-273000" primary control-plane node in "cert-expiration-273000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-273000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-273000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-273000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-28 18:35:56.831425 -0700 PDT m=+3007.388148043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-273000 -n cert-expiration-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-273000 -n cert-expiration-273000: exit status 7 (33.532792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-273000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-273000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-273000
--- FAIL: TestCertExpiration (197.63s)
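
TestCertExpiration likewise never exercised its actual subject, certificate rotation, because both starts failed at VM creation on the same socket_vmnet error. For reference, the flow the test automates can be reproduced by hand on a host where the VM boots; this sketch reuses the exact commands from the log, plus a standard openssl invocation to read the certificate's validity window:

	# Start with short-lived certs and wait ~3 minutes for them to expire
	out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=3m --driver=qemu2
	# Read the apiserver certificate's expiry date inside the VM
	out/minikube-darwin-arm64 -p cert-expiration-273000 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# Restarting with a long expiration should regenerate the certs and warn about the expired ones
	out/minikube-darwin-arm64 start -p cert-expiration-273000 --memory=2048 --cert-expiration=8760h --driver=qemu2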

TestDockerFlags (12.45s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-864000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-864000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.056984417s)

-- stdout --
	* [docker-flags-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-864000" primary control-plane node in "docker-flags-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:32:29.495892    4677 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:29.496033    4677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:29.496039    4677 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:29.496042    4677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:29.496178    4677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:32:29.497238    4677 out.go:298] Setting JSON to false
	I0728 18:32:29.513512    4677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3720,"bootTime":1722213029,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:32:29.513575    4677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:32:29.531423    4677 out.go:177] * [docker-flags-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:32:29.540542    4677 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:32:29.540562    4677 notify.go:220] Checking for updates...
	I0728 18:32:29.548485    4677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:32:29.552544    4677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:32:29.555464    4677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:32:29.558555    4677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:32:29.561516    4677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:32:29.564838    4677 config.go:182] Loaded profile config "force-systemd-flag-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:29.564903    4677 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:29.564957    4677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:32:29.567602    4677 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:32:29.576523    4677 start.go:297] selected driver: qemu2
	I0728 18:32:29.576531    4677 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:32:29.576540    4677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:32:29.578631    4677 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:32:29.582539    4677 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:32:29.584178    4677 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0728 18:32:29.584205    4677 cni.go:84] Creating CNI manager for ""
	I0728 18:32:29.584212    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:32:29.584225    4677 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:32:29.584249    4677 start.go:340] cluster config:
	{Name:docker-flags-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:29.587539    4677 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:29.590677    4677 out.go:177] * Starting "docker-flags-864000" primary control-plane node in "docker-flags-864000" cluster
	I0728 18:32:29.597495    4677 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:32:29.597508    4677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:32:29.597517    4677 cache.go:56] Caching tarball of preloaded images
	I0728 18:32:29.597571    4677 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:32:29.597576    4677 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:32:29.597639    4677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/docker-flags-864000/config.json ...
	I0728 18:32:29.597648    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/docker-flags-864000/config.json: {Name:mk7754554d651d72e1bbe301e2563aab94555925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:32:29.597847    4677 start.go:360] acquireMachinesLock for docker-flags-864000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:31.590785    4677 start.go:364] duration metric: took 1.992916208s to acquireMachinesLock for "docker-flags-864000"
	I0728 18:32:31.590900    4677 start.go:93] Provisioning new machine with config: &{Name:docker-flags-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:31.591094    4677 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:31.599643    4677 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:31.649376    4677 start.go:159] libmachine.API.Create for "docker-flags-864000" (driver="qemu2")
	I0728 18:32:31.649425    4677 client.go:168] LocalClient.Create starting
	I0728 18:32:31.649568    4677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:31.649623    4677 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:31.649640    4677 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:31.649712    4677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:31.649755    4677 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:31.649769    4677 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:31.650573    4677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:31.807772    4677 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:31.877897    4677 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:31.877902    4677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:31.878124    4677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:31.887263    4677 main.go:141] libmachine: STDOUT: 
	I0728 18:32:31.887284    4677 main.go:141] libmachine: STDERR: 
	I0728 18:32:31.887334    4677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2 +20000M
	I0728 18:32:31.895290    4677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:31.895304    4677 main.go:141] libmachine: STDERR: 
	I0728 18:32:31.895319    4677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:31.895323    4677 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:31.895334    4677 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:31.895359    4677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9b:8c:7a:b5:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:31.896946    4677 main.go:141] libmachine: STDOUT: 
	I0728 18:32:31.896959    4677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:31.896982    4677 client.go:171] duration metric: took 247.550084ms to LocalClient.Create
	I0728 18:32:33.899171    4677 start.go:128] duration metric: took 2.308047208s to createHost
	I0728 18:32:33.899303    4677 start.go:83] releasing machines lock for "docker-flags-864000", held for 2.308414083s
	W0728 18:32:33.899354    4677 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:33.920296    4677 out.go:177] * Deleting "docker-flags-864000" in qemu2 ...
	W0728 18:32:33.950076    4677 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:33.950102    4677 start.go:729] Will try again in 5 seconds ...
	I0728 18:32:38.952397    4677 start.go:360] acquireMachinesLock for docker-flags-864000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:38.955310    4677 start.go:364] duration metric: took 2.771125ms to acquireMachinesLock for "docker-flags-864000"
	I0728 18:32:38.955447    4677 start.go:93] Provisioning new machine with config: &{Name:docker-flags-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:38.955702    4677 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:38.966100    4677 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:39.018373    4677 start.go:159] libmachine.API.Create for "docker-flags-864000" (driver="qemu2")
	I0728 18:32:39.018423    4677 client.go:168] LocalClient.Create starting
	I0728 18:32:39.018505    4677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:39.018558    4677 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:39.018577    4677 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:39.018641    4677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:39.018672    4677 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:39.018686    4677 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:39.019205    4677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:39.344054    4677 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:39.457977    4677 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:39.457984    4677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:39.458160    4677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:39.467125    4677 main.go:141] libmachine: STDOUT: 
	I0728 18:32:39.467146    4677 main.go:141] libmachine: STDERR: 
	I0728 18:32:39.467194    4677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2 +20000M
	I0728 18:32:39.474921    4677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:39.474936    4677 main.go:141] libmachine: STDERR: 
	I0728 18:32:39.474954    4677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:39.474959    4677 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:39.474972    4677 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:39.474997    4677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1f:20:19:15:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/docker-flags-864000/disk.qcow2
	I0728 18:32:39.476528    4677 main.go:141] libmachine: STDOUT: 
	I0728 18:32:39.476543    4677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:39.476557    4677 client.go:171] duration metric: took 458.130334ms to LocalClient.Create
	I0728 18:32:41.478820    4677 start.go:128] duration metric: took 2.523060083s to createHost
	I0728 18:32:41.478971    4677 start.go:83] releasing machines lock for "docker-flags-864000", held for 2.523638291s
	W0728 18:32:41.479254    4677 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:41.492774    4677 out.go:177] 
	W0728 18:32:41.497704    4677 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:32:41.497738    4677 out.go:239] * 
	* 
	W0728 18:32:41.500604    4677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:32:41.509722    4677 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-864000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (87.724791ms)

-- stdout --
	* The control-plane node docker-flags-864000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-864000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-864000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-864000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-864000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-864000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-864000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (95.449541ms)

-- stdout --
	* The control-plane node docker-flags-864000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-864000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-864000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-864000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-864000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-864000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-28 18:32:41.704518 -0700 PDT m=+2812.261157709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-864000 -n docker-flags-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-864000 -n docker-flags-864000: exit status 7 (35.982292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-864000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-864000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-864000
--- FAIL: TestDockerFlags (12.45s)
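
As with the cert tests, TestDockerFlags failed at VM creation, before any Docker flag propagation could be checked. On a host where the VM boots, the same verification the test performs can be run directly; the expected substrings in the comments are inferred from the --docker-env and --docker-opt flags passed above, not observed in this run:

	# Environment= should include the values passed via --docker-env (FOO=BAR, BAZ=BAT)
	out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# ExecStart= should include the daemon options passed via --docker-opt (--debug, --icc=true)
	out/minikube-darwin-arm64 -p docker-flags-864000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"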

TestForceSystemdFlag (11.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-777000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-777000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.267736208s)

-- stdout --
	* [force-systemd-flag-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-777000" primary control-plane node in "force-systemd-flag-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:32:27.757716    4663 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:27.757831    4663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:27.757835    4663 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:27.757837    4663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:27.757972    4663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:32:27.759069    4663 out.go:298] Setting JSON to false
	I0728 18:32:27.774785    4663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3718,"bootTime":1722213029,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:32:27.774853    4663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:32:27.779987    4663 out.go:177] * [force-systemd-flag-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:32:27.787012    4663 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:32:27.787051    4663 notify.go:220] Checking for updates...
	I0728 18:32:27.795860    4663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:32:27.799981    4663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:32:27.802988    4663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:32:27.805975    4663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:32:27.808984    4663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:32:27.812317    4663 config.go:182] Loaded profile config "force-systemd-env-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:27.812391    4663 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:27.812437    4663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:32:27.815953    4663 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:32:27.822982    4663 start.go:297] selected driver: qemu2
	I0728 18:32:27.822989    4663 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:32:27.823003    4663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:32:27.825346    4663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:32:27.828969    4663 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:32:27.832094    4663 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 18:32:27.832131    4663 cni.go:84] Creating CNI manager for ""
	I0728 18:32:27.832138    4663 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:32:27.832143    4663 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:32:27.832174    4663 start.go:340] cluster config:
	{Name:force-systemd-flag-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:27.835824    4663 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:27.843984    4663 out.go:177] * Starting "force-systemd-flag-777000" primary control-plane node in "force-systemd-flag-777000" cluster
	I0728 18:32:27.848035    4663 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:32:27.848048    4663 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:32:27.848059    4663 cache.go:56] Caching tarball of preloaded images
	I0728 18:32:27.848113    4663 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:32:27.848119    4663 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:32:27.848181    4663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/force-systemd-flag-777000/config.json ...
	I0728 18:32:27.848192    4663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/force-systemd-flag-777000/config.json: {Name:mk4ddeca1aee9e72367aa52cd44ff7b71ac58bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:32:27.848440    4663 start.go:360] acquireMachinesLock for force-systemd-flag-777000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:29.099910    4663 start.go:364] duration metric: took 1.251445583s to acquireMachinesLock for "force-systemd-flag-777000"
	I0728 18:32:29.100104    4663 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:29.100296    4663 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:29.109400    4663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:29.158026    4663 start.go:159] libmachine.API.Create for "force-systemd-flag-777000" (driver="qemu2")
	I0728 18:32:29.158067    4663 client.go:168] LocalClient.Create starting
	I0728 18:32:29.158209    4663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:29.158275    4663 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:29.158293    4663 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:29.158358    4663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:29.158403    4663 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:29.158419    4663 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:29.159058    4663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:29.486633    4663 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:29.565016    4663 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:29.565024    4663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:29.568122    4663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:29.578181    4663 main.go:141] libmachine: STDOUT: 
	I0728 18:32:29.578206    4663 main.go:141] libmachine: STDERR: 
	I0728 18:32:29.578264    4663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2 +20000M
	I0728 18:32:29.586638    4663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:29.586655    4663 main.go:141] libmachine: STDERR: 
	I0728 18:32:29.586676    4663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:29.586682    4663 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:29.586699    4663 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:29.586725    4663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:39:de:eb:f7:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:29.588383    4663 main.go:141] libmachine: STDOUT: 
	I0728 18:32:29.588397    4663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:29.588414    4663 client.go:171] duration metric: took 430.342ms to LocalClient.Create
	I0728 18:32:31.590588    4663 start.go:128] duration metric: took 2.490239875s to createHost
	I0728 18:32:31.590634    4663 start.go:83] releasing machines lock for "force-systemd-flag-777000", held for 2.490669833s
	W0728 18:32:31.590696    4663 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:31.606628    4663 out.go:177] * Deleting "force-systemd-flag-777000" in qemu2 ...
	W0728 18:32:31.628517    4663 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:31.628539    4663 start.go:729] Will try again in 5 seconds ...
	I0728 18:32:36.630782    4663 start.go:360] acquireMachinesLock for force-systemd-flag-777000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:36.631358    4663 start.go:364] duration metric: took 363.542µs to acquireMachinesLock for "force-systemd-flag-777000"
	I0728 18:32:36.631505    4663 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:36.631750    4663 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:36.651529    4663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:36.701632    4663 start.go:159] libmachine.API.Create for "force-systemd-flag-777000" (driver="qemu2")
	I0728 18:32:36.701676    4663 client.go:168] LocalClient.Create starting
	I0728 18:32:36.701783    4663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:36.701849    4663 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:36.701866    4663 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:36.701934    4663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:36.701979    4663 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:36.701991    4663 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:36.702501    4663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:36.859907    4663 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:36.934100    4663 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:36.934105    4663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:36.934312    4663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:36.943325    4663 main.go:141] libmachine: STDOUT: 
	I0728 18:32:36.943411    4663 main.go:141] libmachine: STDERR: 
	I0728 18:32:36.943465    4663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2 +20000M
	I0728 18:32:36.951241    4663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:36.951339    4663 main.go:141] libmachine: STDERR: 
	I0728 18:32:36.951350    4663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:36.951355    4663 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:36.951366    4663 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:36.951390    4663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:fd:7b:a6:81:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-flag-777000/disk.qcow2
	I0728 18:32:36.952962    4663 main.go:141] libmachine: STDOUT: 
	I0728 18:32:36.952982    4663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:36.952996    4663 client.go:171] duration metric: took 251.313959ms to LocalClient.Create
	I0728 18:32:38.955184    4663 start.go:128] duration metric: took 2.323391417s to createHost
	I0728 18:32:38.955223    4663 start.go:83] releasing machines lock for "force-systemd-flag-777000", held for 2.323840541s
	W0728 18:32:38.955545    4663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:38.973145    4663 out.go:177] 
	W0728 18:32:38.978083    4663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:32:38.978125    4663 out.go:239] * 
	* 
	W0728 18:32:38.980689    4663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:32:38.990025    4663 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-777000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-777000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-777000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (94.170958ms)

-- stdout --
	* The control-plane node force-systemd-flag-777000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-777000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-777000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-28 18:32:39.095154 -0700 PDT m=+2809.651791793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-777000 -n force-systemd-flag-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-777000 -n force-systemd-flag-777000: exit status 7 (36.482208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-777000
--- FAIL: TestForceSystemdFlag (11.59s)
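
Both createHost attempts above die at the same point: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe (the service name is an assumption about this runner's setup, not something the log confirms):

	# Does the socket the qemu2 driver dials actually exist, and is the daemon alive?
	$ ls -l /var/run/socket_vmnet
	$ pgrep -fl socket_vmnet

	# If not, restart the root-owned service before re-running the test
	# (socket_vmnet needs root to manage vmnet interfaces).
	$ sudo brew services restart socket_vmnet

TestForceSystemdEnv below fails with the identical "Connection refused", so a single daemon outage on this agent would explain both.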

TestForceSystemdEnv (10.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-878000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-878000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.850077333s)

-- stdout --
	* [force-systemd-env-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-878000" primary control-plane node in "force-systemd-env-878000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:32:19.319477    4625 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:32:19.319596    4625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:19.319608    4625 out.go:304] Setting ErrFile to fd 2...
	I0728 18:32:19.319610    4625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:32:19.319749    4625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:32:19.320822    4625 out.go:298] Setting JSON to false
	I0728 18:32:19.337075    4625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3710,"bootTime":1722213029,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:32:19.337147    4625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:32:19.343156    4625 out.go:177] * [force-systemd-env-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:32:19.351106    4625 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:32:19.351169    4625 notify.go:220] Checking for updates...
	I0728 18:32:19.358047    4625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:32:19.361092    4625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:32:19.364106    4625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:32:19.367061    4625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:32:19.370077    4625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0728 18:32:19.373386    4625 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:32:19.373433    4625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:32:19.381011    4625 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:32:19.392063    4625 start.go:297] selected driver: qemu2
	I0728 18:32:19.392069    4625 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:32:19.392075    4625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:32:19.394577    4625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:32:19.397140    4625 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:32:19.400119    4625 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 18:32:19.400165    4625 cni.go:84] Creating CNI manager for ""
	I0728 18:32:19.400178    4625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:32:19.400184    4625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:32:19.400210    4625 start.go:340] cluster config:
	{Name:force-systemd-env-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:32:19.404009    4625 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:32:19.410947    4625 out.go:177] * Starting "force-systemd-env-878000" primary control-plane node in "force-systemd-env-878000" cluster
	I0728 18:32:19.415070    4625 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:32:19.415087    4625 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:32:19.415104    4625 cache.go:56] Caching tarball of preloaded images
	I0728 18:32:19.415176    4625 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:32:19.415183    4625 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:32:19.415271    4625 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/force-systemd-env-878000/config.json ...
	I0728 18:32:19.415285    4625 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/force-systemd-env-878000/config.json: {Name:mk7d6acd35dc24496f72ee0804c9b2d1ab05678b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:32:19.415534    4625 start.go:360] acquireMachinesLock for force-systemd-env-878000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:19.415573    4625 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "force-systemd-env-878000"
	I0728 18:32:19.415587    4625 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:19.415620    4625 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:19.424049    4625 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:19.442494    4625 start.go:159] libmachine.API.Create for "force-systemd-env-878000" (driver="qemu2")
	I0728 18:32:19.442554    4625 client.go:168] LocalClient.Create starting
	I0728 18:32:19.442628    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:19.442661    4625 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:19.442671    4625 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:19.442714    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:19.442738    4625 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:19.442751    4625 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:19.443114    4625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:19.590249    4625 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:19.738882    4625 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:19.738892    4625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:19.739139    4625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:19.748328    4625 main.go:141] libmachine: STDOUT: 
	I0728 18:32:19.748345    4625 main.go:141] libmachine: STDERR: 
	I0728 18:32:19.748410    4625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2 +20000M
	I0728 18:32:19.756267    4625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:19.756283    4625 main.go:141] libmachine: STDERR: 
	I0728 18:32:19.756304    4625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:19.756308    4625 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:19.756318    4625 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:19.756343    4625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:75:89:60:60:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:19.757961    4625 main.go:141] libmachine: STDOUT: 
	I0728 18:32:19.757973    4625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:19.757989    4625 client.go:171] duration metric: took 315.428709ms to LocalClient.Create
	I0728 18:32:21.760181    4625 start.go:128] duration metric: took 2.344537625s to createHost
	I0728 18:32:21.760248    4625 start.go:83] releasing machines lock for "force-systemd-env-878000", held for 2.344666667s
	W0728 18:32:21.760355    4625 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:21.771676    4625 out.go:177] * Deleting "force-systemd-env-878000" in qemu2 ...
	W0728 18:32:21.803776    4625 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:21.803809    4625 start.go:729] Will try again in 5 seconds ...
	I0728 18:32:26.805983    4625 start.go:360] acquireMachinesLock for force-systemd-env-878000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:32:26.806542    4625 start.go:364] duration metric: took 457.667µs to acquireMachinesLock for "force-systemd-env-878000"
	I0728 18:32:26.806685    4625 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:32:26.806908    4625 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:32:26.812650    4625 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0728 18:32:26.865274    4625 start.go:159] libmachine.API.Create for "force-systemd-env-878000" (driver="qemu2")
	I0728 18:32:26.865324    4625 client.go:168] LocalClient.Create starting
	I0728 18:32:26.865417    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:32:26.865485    4625 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:26.865501    4625 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:26.865555    4625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:32:26.865597    4625 main.go:141] libmachine: Decoding PEM data...
	I0728 18:32:26.865612    4625 main.go:141] libmachine: Parsing certificate...
	I0728 18:32:26.866100    4625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:32:27.022828    4625 main.go:141] libmachine: Creating SSH key...
	I0728 18:32:27.077942    4625 main.go:141] libmachine: Creating Disk image...
	I0728 18:32:27.077948    4625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:32:27.078168    4625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:27.087587    4625 main.go:141] libmachine: STDOUT: 
	I0728 18:32:27.087604    4625 main.go:141] libmachine: STDERR: 
	I0728 18:32:27.087654    4625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2 +20000M
	I0728 18:32:27.095795    4625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:32:27.095810    4625 main.go:141] libmachine: STDERR: 
	I0728 18:32:27.095823    4625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:27.095828    4625 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:32:27.095852    4625 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:32:27.095877    4625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9a:bb:71:ac:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/force-systemd-env-878000/disk.qcow2
	I0728 18:32:27.097490    4625 main.go:141] libmachine: STDOUT: 
	I0728 18:32:27.097512    4625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:32:27.097523    4625 client.go:171] duration metric: took 232.194208ms to LocalClient.Create
	I0728 18:32:29.099726    4625 start.go:128] duration metric: took 2.29275475s to createHost
	I0728 18:32:29.099774    4625 start.go:83] releasing machines lock for "force-systemd-env-878000", held for 2.293184834s
	W0728 18:32:29.100043    4625 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:32:29.113590    4625 out.go:177] 
	W0728 18:32:29.118669    4625 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:32:29.118725    4625 out.go:239] * 
	* 
	W0728 18:32:29.120752    4625 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:32:29.130471    4625 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-878000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-878000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-878000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.765291ms)

-- stdout --
	* The control-plane node force-systemd-env-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-878000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-878000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-28 18:32:29.233728 -0700 PDT m=+2799.790361876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-878000 -n force-systemd-env-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-878000 -n force-systemd-env-878000: exit status 7 (35.925458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-878000
--- FAIL: TestForceSystemdEnv (10.17s)

TestFunctional/parallel/ServiceCmdConnect (36.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-843000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-843000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-blgq2" [7e1853a3-992c-4b5f-9084-626f6884bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-blgq2" [7e1853a3-992c-4b5f-9084-626f6884bf4e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003832333s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30465
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30465: Get "http://192.168.105.4:30465": dial tcp 192.168.105.4:30465: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-843000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-blgq2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-843000/192.168.105.4
Start Time:       Sun, 28 Jul 2024 17:56:48 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://23d29488b27ab8c3c166a62b21654a4c067b8dd3d4e318140fd16b1ffe8fb818
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 28 Jul 2024 17:57:05 -0700
      Finished:     Sun, 28 Jul 2024 17:57:05 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t2fq2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-t2fq2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-blgq2 to functional-843000
  Normal   Pulled     18s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    18s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    18s (x3 over 35s)  kubelet            Started container echoserver-arm
  Warning  BackOff    6s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-blgq2_default(7e1853a3-992c-4b5f-9084-626f6884bf4e)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-843000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
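
The "exec format error" above means the kernel could not execute the container's entrypoint binary; on this arm64 node that points to an architecture mismatch in registry.k8s.io/echoserver-arm:1.8 rather than a networking problem. One way to check what architecture the image actually carries (a sketch using stock docker commands on the host):

	$ docker pull registry.k8s.io/echoserver-arm:1.8
	$ docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
	# "linux/amd64" here would explain the crash loop: an x86-64 /usr/sbin/nginx
	# cannot exec on an arm64 VM, so every restart dies immediately.
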
functional_test.go:1614: (dbg) Run:  kubectl --context functional-843000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.138.99
IPs:                      10.100.138.99
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30465/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
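
Note the empty Endpoints: field above: because every replica crash-loops and never becomes Ready, the service has no backends, which is why the test's probes of http://192.168.105.4:30465 were refused. A quick confirmation against the same context the test drives (sketch):

	$ kubectl --context functional-843000 get endpoints hello-node-connect
	$ kubectl --context functional-843000 get pods -l app=hello-node-connect -o wide
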
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-843000 -n functional-843000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-843000 ssh -- ls                                                                                         | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh cat                                                                                           | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | /mount-9p/test-1722214631226537000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh stat                                                                                          | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh stat                                                                                          | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh sudo                                                                                          | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port115686256/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh -- ls                                                                                         | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh sudo                                                                                          | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-843000 ssh findmnt                                                                                       | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT | 28 Jul 24 17:57 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-843000 --dry-run                                                                                      | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-843000                                                                                                | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-843000 | jenkins | v1.33.1 | 28 Jul 24 17:57 PDT |                     |
	|           | -p functional-843000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
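	The tail of the table is the TestFunctional/parallel/MountCmd/VerifyCleanup sequence: three concurrent 9p mounts of the same host directory onto /mount1, /mount2 and /mount3, a findmnt probe of each target, then a --kill=true sweep of all mount processes. A minimal sketch of that flow, where $SRC stands in for the host temp directory (placeholder, not from this log) and shell backgrounding stands in for the test harness's separate processes:
	
	  # three mounts of one host dir, as the harness does in separate processes
	  minikube -p functional-843000 mount "$SRC:/mount1" --alsologtostderr -v=1 &
	  minikube -p functional-843000 mount "$SRC:/mount2" --alsologtostderr -v=1 &
	  minikube -p functional-843000 mount "$SRC:/mount3" --alsologtostderr -v=1 &
	  # verify each target resolves to a mountpoint inside the VM
	  for m in /mount1 /mount2 /mount3; do minikube -p functional-843000 ssh "findmnt -T $m"; done
	  # kill every mount process registered for the profile
	  minikube -p functional-843000 mount --kill=true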
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:57:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:57:18.322956    2667 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:57:18.323086    2667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.323089    2667 out.go:304] Setting ErrFile to fd 2...
	I0728 17:57:18.323091    2667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.323225    2667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 17:57:18.324654    2667 out.go:298] Setting JSON to false
	I0728 17:57:18.341567    2667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1609,"bootTime":1722213029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:57:18.341653    2667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:57:18.345591    2667 out.go:177] * [functional-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:57:18.352670    2667 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:57:18.352723    2667 notify.go:220] Checking for updates...
	I0728 17:57:18.359571    2667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:57:18.362643    2667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:57:18.366559    2667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:57:18.369608    2667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 17:57:18.372633    2667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:57:18.375823    2667 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:57:18.376070    2667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:57:18.379605    2667 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 17:57:18.385596    2667 start.go:297] selected driver: qemu2
	I0728 17:57:18.385602    2667 start.go:901] validating driver "qemu2" against &{Name:functional-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:57:18.385681    2667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:57:18.391612    2667 out.go:177] 
	W0728 17:57:18.395667    2667 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0728 17:57:18.399614    2667 out.go:177] 
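	The only failure in this start attempt is minikube's up-front memory validation: the dry run requests 250MB, below the 1800MB usable minimum, so it exits with RSRC_INSUFFICIENT_REQ_MEMORY before touching the VM. A sketch of the failing call from the log next to a passing one (the 2048MB value is illustrative; any value at or above the minimum clears the check):
	
	  # fails validation, as logged above: 250MB < 1800MB minimum
	  minikube start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
	  # clears the memory check
	  minikube start -p functional-843000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2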
	
	
	==> Docker <==
	Jul 29 00:57:13 functional-843000 dockerd[5809]: time="2024-07-29T00:57:13.558604027Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:57:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 29 00:57:14 functional-843000 dockerd[5809]: time="2024-07-29T00:57:14.670386047Z" level=info msg="shim disconnected" id=476a39f82d25deae0d61d12a8b70f8aa046bf67c20209ea7ca7d69f66a6a2069 namespace=moby
	Jul 29 00:57:14 functional-843000 dockerd[5809]: time="2024-07-29T00:57:14.670415975Z" level=warning msg="cleaning up after shim disconnected" id=476a39f82d25deae0d61d12a8b70f8aa046bf67c20209ea7ca7d69f66a6a2069 namespace=moby
	Jul 29 00:57:14 functional-843000 dockerd[5809]: time="2024-07-29T00:57:14.670420269Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:57:14 functional-843000 dockerd[5803]: time="2024-07-29T00:57:14.670504385Z" level=info msg="ignoring event" container=476a39f82d25deae0d61d12a8b70f8aa046bf67c20209ea7ca7d69f66a6a2069 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.285542553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.285600992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.285613872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.285652387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.289334657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.289370754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.289378716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:19 functional-843000 dockerd[5809]: time="2024-07-29T00:57:19.289518937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:19 functional-843000 cri-dockerd[6064]: time="2024-07-29T00:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/01a765071cc2e24736e609e0d8b5755c2b46be93137923c6da7bab0429bfd2a9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 00:57:19 functional-843000 cri-dockerd[6064]: time="2024-07-29T00:57:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/740fac0b00c63dfe5945899991caf23d4cd3faa7e9d85ab98a3ce580b5dd2ea8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 00:57:19 functional-843000 dockerd[5803]: time="2024-07-29T00:57:19.584557479Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.260664995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.260724560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.260737898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.260772120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.289756138Z" level=info msg="shim disconnected" id=2a4419a48c853245d4165fc13e412818c890de4537e9aac535b3bbc245f0e124 namespace=moby
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.289788275Z" level=warning msg="cleaning up after shim disconnected" id=2a4419a48c853245d4165fc13e412818c890de4537e9aac535b3bbc245f0e124 namespace=moby
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.289792610Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 00:57:22 functional-843000 dockerd[5803]: time="2024-07-29T00:57:22.289774770Z" level=info msg="ignoring event" container=2a4419a48c853245d4165fc13e412818c890de4537e9aac535b3bbc245f0e124 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 00:57:22 functional-843000 dockerd[5809]: time="2024-07-29T00:57:22.297798491Z" level=warning msg="cleanup warnings time=\"2024-07-29T00:57:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
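	These dockerd and cri-dockerd entries come from the guest's systemd journal. To pull the same slice by hand, assuming the standard minikube unit names (docker and cri-docker):
	
	  minikube -p functional-843000 ssh -- sudo journalctl -u docker --no-pager | tail -n 25
	  minikube -p functional-843000 ssh -- sudo journalctl -u cri-docker --no-pager | tail -n 25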
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2a4419a48c853       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            3                   7150eb0f0c4c0       hello-node-65f5d5cc78-9kdn9
	5d026320c0913       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 seconds ago       Exited              mount-munger              0                   476a39f82d25d       busybox-mount
	23d29488b27ab       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   e8c402a6b9b14       hello-node-connect-6f49f58cd5-blgq2
	a9f861e254fed       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         19 seconds ago       Running             myfrontend                0                   b5a7fbe232ad3       sp-pod
	8081603ec4ef7       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         42 seconds ago       Running             nginx                     0                   8e23937d6ba58       nginx-svc
	55a36efdce602       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   65682ecc7ef3b       storage-provisioner
	aab35e931facf       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   126cac11a94ec       coredns-7db6d8ff4d-gjvlj
	984eec4a17e81       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   65682ecc7ef3b       storage-provisioner
	a2788ce436388       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   61475a4ce9a13       kube-proxy-rt8cg
	2f7ad93841d6a       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   1b3d29dc931cc       kube-scheduler-functional-843000
	f85d2bae8afee       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   d683e54a0263a       kube-controller-manager-functional-843000
	e9d800972f3be       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   d6ff001c26a17       etcd-functional-843000
	05f042a3c42f2       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   d11a48b0532ed       kube-apiserver-functional-843000
	a5e4291a025ca       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   00787865d2cca       coredns-7db6d8ff4d-gjvlj
	c00f581879fc9       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   2ded6aa2f1e8f       kube-proxy-rt8cg
	85ac202361cf1       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   23ac56de48727       kube-controller-manager-functional-843000
	c654fa833e466       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   3d22ae0efd89e       kube-scheduler-functional-843000
	60adcd24ee04d       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   6ebd0b9d7e183       etcd-functional-843000
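	The same container inventory is available straight from the CRI socket inside the node; a one-liner, assuming crictl ships in the guest image (it does in recent minikube ISOs):
	
	  minikube -p functional-843000 ssh -- sudo crictl ps -a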
	
	
	==> coredns [a5e4291a025c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57020 - 27157 "HINFO IN 2703689441100981899.3245340760478312071. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005337039s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aab35e931fac] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59103 - 40896 "HINFO IN 2165583336686806189.3275938784900383937. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004406577s
	[INFO] 10.244.0.1:49235 - 28867 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000128387s
	[INFO] 10.244.0.1:27245 - 52805 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000092247s
	[INFO] 10.244.0.1:18315 - 34201 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000048937s
	[INFO] 10.244.0.1:52638 - 33645 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00100242s
	[INFO] 10.244.0.1:25419 - 61530 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000100333s
	[INFO] 10.244.0.1:12832 - 11280 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000126303s
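	The nginx-svc A/AAAA answers above confirm in-cluster service DNS is healthy. A quick way to reproduce those queries from a throwaway pod (the busybox tag is an assumption; any image with nslookup works):
	
	  kubectl --context functional-843000 run dns-check --rm -it --restart=Never \
	    --image=busybox:1.28 -- nslookup nginx-svc.default.svc.cluster.local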
	
	
	==> describe nodes <==
	Name:               functional-843000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-843000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=functional-843000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_28T17_54_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 00:54:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-843000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 00:57:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 00:57:08 +0000   Mon, 29 Jul 2024 00:54:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 00:57:08 +0000   Mon, 29 Jul 2024 00:54:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 00:57:08 +0000   Mon, 29 Jul 2024 00:54:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 00:57:08 +0000   Mon, 29 Jul 2024 00:54:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-843000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 27bd8cac1ce549e99243a185d501291e
	  System UUID:                27bd8cac1ce549e99243a185d501291e
	  Boot ID:                    93e0e5fd-645d-40df-8350-4133d6e1698c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-9kdn9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-6f49f58cd5-blgq2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-7db6d8ff4d-gjvlj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m19s
	  kube-system                 etcd-functional-843000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m34s
	  kube-system                 kube-apiserver-functional-843000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-843000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-rt8cg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-843000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-mx2sj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-fl4d6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  Starting                 75s                    kube-proxy       
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m34s (x2 over 2m34s)  kubelet          Node functional-843000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m34s (x2 over 2m34s)  kubelet          Node functional-843000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m34s (x2 over 2m34s)  kubelet          Node functional-843000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m30s                  kubelet          Node functional-843000 status is now: NodeReady
	  Normal  RegisteredNode           2m20s                  node-controller  Node functional-843000 event: Registered Node functional-843000 in Controller
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)    kubelet          Node functional-843000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)    kubelet          Node functional-843000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)    kubelet          Node functional-843000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                   node-controller  Node functional-843000 event: Registered Node functional-843000 in Controller
	  Normal  Starting                 78s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-843000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-843000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-843000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                    node-controller  Node functional-843000 event: Registered Node functional-843000 in Controller
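	This node report regenerates with a plain describe call; minikube names the kubeconfig context after the profile, so:
	
	  kubectl --context functional-843000 describe node functional-843000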
	
	
	==> dmesg <==
	[  +3.903873] kauditd_printk_skb: 115 callbacks suppressed
	[ +15.053947] systemd-fstab-generator[4901]: Ignoring "noauto" option for root device
	[ +10.463149] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[  +0.054972] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.101390] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.112750] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.107543] systemd-fstab-generator[5387]: Ignoring "noauto" option for root device
	[  +5.114605] kauditd_printk_skb: 91 callbacks suppressed
	[Jul29 00:56] systemd-fstab-generator[6016]: Ignoring "noauto" option for root device
	[  +0.087123] systemd-fstab-generator[6028]: Ignoring "noauto" option for root device
	[  +0.087370] systemd-fstab-generator[6041]: Ignoring "noauto" option for root device
	[  +0.084413] systemd-fstab-generator[6056]: Ignoring "noauto" option for root device
	[  +0.214697] systemd-fstab-generator[6221]: Ignoring "noauto" option for root device
	[  +1.168719] systemd-fstab-generator[6349]: Ignoring "noauto" option for root device
	[  +3.394196] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.470837] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.189825] systemd-fstab-generator[7437]: Ignoring "noauto" option for root device
	[  +4.577382] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.117632] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.605348] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.811351] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.036408] kauditd_printk_skb: 32 callbacks suppressed
	[Jul29 00:57] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.780418] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.552394] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [60adcd24ee04] <==
	{"level":"info","ts":"2024-07-29T00:55:22.016418Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T00:55:23.380985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T00:55:23.381129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T00:55:23.38117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-29T00:55:23.381202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T00:55:23.381217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T00:55:23.381242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T00:55:23.381266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T00:55:23.386176Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-843000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T00:55:23.386244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T00:55:23.386945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T00:55:23.391233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T00:55:23.393885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T00:55:23.39478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T00:55:23.394816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T00:55:51.070219Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T00:55:51.070247Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-843000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-29T00:55:51.070294Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T00:55:51.070333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T00:55:51.089402Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T00:55:51.089448Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T00:55:51.089472Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-29T00:55:51.090834Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T00:55:51.090862Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T00:55:51.090865Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-843000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [e9d800972f3b] <==
	{"level":"info","ts":"2024-07-29T00:56:05.847552Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T00:56:05.847615Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T00:56:05.847694Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T00:56:05.847724Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T00:56:05.847741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T00:56:05.847872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T00:56:05.847891Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T00:56:05.848494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-29T00:56:05.848534Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-29T00:56:05.848592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T00:56:05.84862Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T00:56:06.926409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T00:56:06.926589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T00:56:06.926657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T00:56:06.926866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T00:56:06.926889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T00:56:06.926915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T00:56:06.926934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T00:56:06.929515Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-843000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T00:56:06.929652Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T00:56:06.929834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T00:56:06.930394Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T00:56:06.930424Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T00:56:06.93406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T00:56:06.9341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 00:57:23 up 2 min,  0 users,  load average: 1.19, 0.48, 0.18
	Linux functional-843000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [05f042a3c42f] <==
	I0729 00:56:07.561753       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 00:56:07.561789       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 00:56:07.561835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 00:56:07.561858       1 aggregator.go:165] initial CRD sync complete...
	I0729 00:56:07.561875       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 00:56:07.561888       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 00:56:07.561911       1 cache.go:39] Caches are synced for autoregister controller
	I0729 00:56:07.588708       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 00:56:08.463707       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 00:56:08.566435       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0729 00:56:08.567001       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 00:56:08.569210       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 00:56:08.824556       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 00:56:08.828735       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 00:56:08.838962       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 00:56:08.847354       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 00:56:08.849289       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 00:56:28.673836       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.14.39"}
	I0729 00:56:33.750128       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 00:56:33.792906       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.106.77"}
	I0729 00:56:37.809890       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.92.161"}
	I0729 00:56:48.215594       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.138.99"}
	I0729 00:57:18.874750       1 controller.go:615] quota admission added evaluator for: namespaces
	I0729 00:57:18.977468       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.106.135"}
	I0729 00:57:18.987676       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.38.88"}
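	The clusterIP allocations logged here track the services the functional tests create (hello-node, nginx-svc, hello-node-connect, then the dashboard pair). The hello-node allocation corresponds to a create/expose pair of roughly this shape (the image tag is an assumption, inferred from the echoserver-arm containers listed earlier, not taken from this log):
	
	  kubectl --context functional-843000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	  kubectl --context functional-843000 expose deployment hello-node --type=LoadBalancer --port=8080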
	
	
	==> kube-controller-manager [85ac202361cf] <==
	I0729 00:55:36.821132       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 00:55:36.821183       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 00:55:36.823185       1 shared_informer.go:320] Caches are synced for namespace
	I0729 00:55:36.825500       1 shared_informer.go:320] Caches are synced for node
	I0729 00:55:36.825545       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0729 00:55:36.825558       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0729 00:55:36.825564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0729 00:55:36.825567       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 00:55:36.826698       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 00:55:36.829375       1 shared_informer.go:320] Caches are synced for TTL
	I0729 00:55:36.876014       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 00:55:36.878097       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 00:55:36.884002       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 00:55:36.885157       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 00:55:36.929336       1 shared_informer.go:320] Caches are synced for expand
	I0729 00:55:36.970147       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 00:55:36.973280       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 00:55:36.978751       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 00:55:37.024632       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 00:55:37.027862       1 shared_informer.go:320] Caches are synced for disruption
	I0729 00:55:37.027870       1 shared_informer.go:320] Caches are synced for deployment
	I0729 00:55:37.032092       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 00:55:37.443903       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 00:55:37.475398       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 00:55:37.475436       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f85d2bae8afe] <==
	I0729 00:56:48.189954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="105.294µs"
	I0729 00:56:49.459025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="16.799µs"
	I0729 00:56:50.464159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="35.056µs"
	I0729 00:56:53.486186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.343µs"
	I0729 00:57:05.149245       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.885µs"
	I0729 00:57:05.553406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.509µs"
	I0729 00:57:08.143542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="37.014µs"
	I0729 00:57:17.144910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.842µs"
	I0729 00:57:18.916915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="24.875784ms"
	E0729 00:57:18.917058       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 00:57:18.926818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="29.383708ms"
	E0729 00:57:18.927055       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 00:57:18.935233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="18.108936ms"
	E0729 00:57:18.936010       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 00:57:18.942590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="15.4727ms"
	E0729 00:57:18.942627       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 00:57:18.953116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.074325ms"
	I0729 00:57:18.955204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.555521ms"
	I0729 00:57:18.961675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.508443ms"
	I0729 00:57:18.963362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.124752ms"
	I0729 00:57:18.963409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="31.762µs"
	I0729 00:57:18.965067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.463µs"
	I0729 00:57:18.972413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10.709385ms"
	I0729 00:57:18.972460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="15.589µs"
	I0729 00:57:22.659662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="25.343µs"
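	The forbidden errors above are a startup race: the dashboard ReplicaSets sync before the kubernetes-dashboard ServiceAccount exists, then succeed on a retry milliseconds later. Confirming the account eventually landed:
	
	  kubectl --context functional-843000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard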
	
	
	==> kube-proxy [a2788ce43638] <==
	I0729 00:56:08.658214       1 server_linux.go:69] "Using iptables proxy"
	I0729 00:56:08.690904       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 00:56:08.772025       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 00:56:08.772045       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 00:56:08.772055       1 server_linux.go:165] "Using iptables Proxier"
	I0729 00:56:08.772792       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 00:56:08.772856       1 server.go:872] "Version info" version="v1.30.3"
	I0729 00:56:08.772865       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 00:56:08.773315       1 config.go:319] "Starting node config controller"
	I0729 00:56:08.773337       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 00:56:08.773401       1 config.go:192] "Starting service config controller"
	I0729 00:56:08.773405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 00:56:08.773411       1 config.go:101] "Starting endpoint slice config controller"
	I0729 00:56:08.773414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 00:56:08.874137       1 shared_informer.go:320] Caches are synced for node config
	I0729 00:56:08.874137       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 00:56:08.874151       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [c00f581879fc] <==
	I0729 00:55:24.716067       1 server_linux.go:69] "Using iptables proxy"
	I0729 00:55:24.746993       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 00:55:24.762257       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 00:55:24.762278       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 00:55:24.762288       1 server_linux.go:165] "Using iptables Proxier"
	I0729 00:55:24.763758       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 00:55:24.763835       1 server.go:872] "Version info" version="v1.30.3"
	I0729 00:55:24.763846       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 00:55:24.764302       1 config.go:192] "Starting service config controller"
	I0729 00:55:24.764312       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 00:55:24.764322       1 config.go:101] "Starting endpoint slice config controller"
	I0729 00:55:24.764325       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 00:55:24.764511       1 config.go:319] "Starting node config controller"
	I0729 00:55:24.764514       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 00:55:24.865029       1 shared_informer.go:320] Caches are synced for node config
	I0729 00:55:24.865050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 00:55:24.865035       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2f7ad93841d6] <==
	I0729 00:56:06.242318       1 serving.go:380] Generated self-signed cert in-memory
	W0729 00:56:07.482853       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 00:56:07.482867       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 00:56:07.482872       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 00:56:07.482875       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 00:56:07.506862       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 00:56:07.506952       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 00:56:07.507777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 00:56:07.507837       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 00:56:07.507873       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 00:56:07.507896       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 00:56:07.608090       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c654fa833e46] <==
	E0729 00:55:24.006258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0729 00:55:24.006292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0729 00:55:24.006341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0729 00:55:24.006376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0729 00:55:24.006403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0729 00:55:24.006451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0729 00:55:24.006471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0729 00:55:24.006520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0729 00:55:24.006542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0729 00:55:24.006571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0729 00:55:24.006602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0729 00:55:24.006633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0729 00:55:24.006653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0729 00:55:24.006718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0729 00:55:24.006744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0729 00:55:24.006790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0729 00:55:24.006808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0729 00:55:24.006838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0729 00:55:24.006869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0729 00:55:24.006900       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0729 00:55:24.006922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0729 00:55:24.007016       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	E0729 00:55:24.007051       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	I0729 00:55:25.099968       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 00:55:51.075846       1 run.go:74] "command failed" err="finished without leader elect"
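
The wall of "clusterrole ... not found" RBAC errors above indicates this scheduler instance started before the apiserver had reconciled the default ClusterRoles (they are recreated on apiserver startup), and the final "finished without leader elect" exit matches the control plane being restarted mid-test. One way to check that the defaults were reconciled afterwards (sketch, assuming the context still resolves):

	kubectl --context functional-843000 get clusterrole system:kube-scheduler system:volume-scheduler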
	
	
	==> kubelet <==
	Jul 29 00:57:08 functional-843000 kubelet[6356]: I0729 00:57:08.142943    6356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.422978485 podStartE2EDuration="5.142930479s" podCreationTimestamp="2024-07-29 00:57:03 +0000 UTC" firstStartedPulling="2024-07-29 00:57:04.009794586 +0000 UTC m=+58.924413131" lastFinishedPulling="2024-07-29 00:57:04.729746538 +0000 UTC m=+59.644365125" observedRunningTime="2024-07-29 00:57:05.560724696 +0000 UTC m=+60.475343283" watchObservedRunningTime="2024-07-29 00:57:08.142930479 +0000 UTC m=+63.057549065"
	Jul 29 00:57:12 functional-843000 kubelet[6356]: I0729 00:57:12.072109    6356 topology_manager.go:215] "Topology Admit Handler" podUID="d5cac01c-f45b-4298-b381-448aede45862" podNamespace="default" podName="busybox-mount"
	Jul 29 00:57:12 functional-843000 kubelet[6356]: I0729 00:57:12.186510    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d5cac01c-f45b-4298-b381-448aede45862-test-volume\") pod \"busybox-mount\" (UID: \"d5cac01c-f45b-4298-b381-448aede45862\") " pod="default/busybox-mount"
	Jul 29 00:57:12 functional-843000 kubelet[6356]: I0729 00:57:12.186532    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lkd\" (UniqueName: \"kubernetes.io/projected/d5cac01c-f45b-4298-b381-448aede45862-kube-api-access-g9lkd\") pod \"busybox-mount\" (UID: \"d5cac01c-f45b-4298-b381-448aede45862\") " pod="default/busybox-mount"
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.799787    6356 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d5cac01c-f45b-4298-b381-448aede45862-test-volume\") pod \"d5cac01c-f45b-4298-b381-448aede45862\" (UID: \"d5cac01c-f45b-4298-b381-448aede45862\") "
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.799812    6356 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9lkd\" (UniqueName: \"kubernetes.io/projected/d5cac01c-f45b-4298-b381-448aede45862-kube-api-access-g9lkd\") pod \"d5cac01c-f45b-4298-b381-448aede45862\" (UID: \"d5cac01c-f45b-4298-b381-448aede45862\") "
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.800011    6356 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5cac01c-f45b-4298-b381-448aede45862-test-volume" (OuterVolumeSpecName: "test-volume") pod "d5cac01c-f45b-4298-b381-448aede45862" (UID: "d5cac01c-f45b-4298-b381-448aede45862"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.802573    6356 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cac01c-f45b-4298-b381-448aede45862-kube-api-access-g9lkd" (OuterVolumeSpecName: "kube-api-access-g9lkd") pod "d5cac01c-f45b-4298-b381-448aede45862" (UID: "d5cac01c-f45b-4298-b381-448aede45862"). InnerVolumeSpecName "kube-api-access-g9lkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.900934    6356 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g9lkd\" (UniqueName: \"kubernetes.io/projected/d5cac01c-f45b-4298-b381-448aede45862-kube-api-access-g9lkd\") on node \"functional-843000\" DevicePath \"\""
	Jul 29 00:57:14 functional-843000 kubelet[6356]: I0729 00:57:14.900943    6356 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d5cac01c-f45b-4298-b381-448aede45862-test-volume\") on node \"functional-843000\" DevicePath \"\""
	Jul 29 00:57:15 functional-843000 kubelet[6356]: I0729 00:57:15.609981    6356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476a39f82d25deae0d61d12a8b70f8aa046bf67c20209ea7ca7d69f66a6a2069"
	Jul 29 00:57:17 functional-843000 kubelet[6356]: I0729 00:57:17.139471    6356 scope.go:117] "RemoveContainer" containerID="23d29488b27ab8c3c166a62b21654a4c067b8dd3d4e318140fd16b1ffe8fb818"
	Jul 29 00:57:17 functional-843000 kubelet[6356]: E0729 00:57:17.139597    6356 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-blgq2_default(7e1853a3-992c-4b5f-9084-626f6884bf4e)\"" pod="default/hello-node-connect-6f49f58cd5-blgq2" podUID="7e1853a3-992c-4b5f-9084-626f6884bf4e"
	Jul 29 00:57:18 functional-843000 kubelet[6356]: I0729 00:57:18.954994    6356 topology_manager.go:215] "Topology Admit Handler" podUID="c7d975c5-5469-49f0-bbe3-8c0439260e74" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-fl4d6"
	Jul 29 00:57:18 functional-843000 kubelet[6356]: E0729 00:57:18.955040    6356 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5cac01c-f45b-4298-b381-448aede45862" containerName="mount-munger"
	Jul 29 00:57:18 functional-843000 kubelet[6356]: I0729 00:57:18.955058    6356 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cac01c-f45b-4298-b381-448aede45862" containerName="mount-munger"
	Jul 29 00:57:18 functional-843000 kubelet[6356]: I0729 00:57:18.956234    6356 topology_manager.go:215] "Topology Admit Handler" podUID="bdb12c24-0511-4917-8f26-61202558fafd" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-mx2sj"
	Jul 29 00:57:19 functional-843000 kubelet[6356]: I0729 00:57:19.020112    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7d975c5-5469-49f0-bbe3-8c0439260e74-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-fl4d6\" (UID: \"c7d975c5-5469-49f0-bbe3-8c0439260e74\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-fl4d6"
	Jul 29 00:57:19 functional-843000 kubelet[6356]: I0729 00:57:19.020148    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptmxk\" (UniqueName: \"kubernetes.io/projected/bdb12c24-0511-4917-8f26-61202558fafd-kube-api-access-ptmxk\") pod \"dashboard-metrics-scraper-b5fc48f67-mx2sj\" (UID: \"bdb12c24-0511-4917-8f26-61202558fafd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-mx2sj"
	Jul 29 00:57:19 functional-843000 kubelet[6356]: I0729 00:57:19.020159    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bdb12c24-0511-4917-8f26-61202558fafd-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-mx2sj\" (UID: \"bdb12c24-0511-4917-8f26-61202558fafd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-mx2sj"
	Jul 29 00:57:19 functional-843000 kubelet[6356]: I0729 00:57:19.020170    6356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd44\" (UniqueName: \"kubernetes.io/projected/c7d975c5-5469-49f0-bbe3-8c0439260e74-kube-api-access-2gd44\") pod \"kubernetes-dashboard-779776cb65-fl4d6\" (UID: \"c7d975c5-5469-49f0-bbe3-8c0439260e74\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-fl4d6"
	Jul 29 00:57:22 functional-843000 kubelet[6356]: I0729 00:57:22.138779    6356 scope.go:117] "RemoveContainer" containerID="ad1d5b2e38e553040c828829675f6e669468a2d034f944194cbab54b774cadcc"
	Jul 29 00:57:22 functional-843000 kubelet[6356]: I0729 00:57:22.654242    6356 scope.go:117] "RemoveContainer" containerID="ad1d5b2e38e553040c828829675f6e669468a2d034f944194cbab54b774cadcc"
	Jul 29 00:57:22 functional-843000 kubelet[6356]: I0729 00:57:22.654393    6356 scope.go:117] "RemoveContainer" containerID="2a4419a48c853245d4165fc13e412818c890de4537e9aac535b3bbc245f0e124"
	Jul 29 00:57:22 functional-843000 kubelet[6356]: E0729 00:57:22.654474    6356 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-9kdn9_default(7effe9a5-ae68-4415-9032-a8acac4fd61f)\"" pod="default/hello-node-65f5d5cc78-9kdn9" podUID="7effe9a5-ae68-4415-9032-a8acac4fd61f"
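
The kubelet lines above show echoserver-arm in CrashLoopBackOff for both hello-node pods, which is what ultimately breaks ServiceCmdConnect. Pulling the crashed container's output would usually narrow this down (sketch; the pod name is taken from the log above and may have been recreated since):

	kubectl --context functional-843000 logs hello-node-connect-6f49f58cd5-blgq2 --previous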
	
	
	==> storage-provisioner [55a36efdce60] <==
	I0729 00:56:21.189636       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 00:56:21.193054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 00:56:21.193067       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 00:56:38.575713       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 00:56:38.575848       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-843000_0ff3a5ab-e55b-43d3-94b8-f6ec5acf500d!
	I0729 00:56:38.575925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5bf60715-e4fc-4b1f-a34a-e67d1e3ff904", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-843000_0ff3a5ab-e55b-43d3-94b8-f6ec5acf500d became leader
	I0729 00:56:38.676102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-843000_0ff3a5ab-e55b-43d3-94b8-f6ec5acf500d!
	I0729 00:56:50.370914       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0729 00:56:50.371513       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"538663fb-48cb-4c33-bcb4-84cdee3dc286", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0729 00:56:50.371009       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    1be1598f-752b-4f14-8b2f-9f150ef6e2bf 316 0 2024-07-29 00:55:04 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-29 00:55:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-538663fb-48cb-4c33-bcb4-84cdee3dc286 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  538663fb-48cb-4c33-bcb4-84cdee3dc286 721 0 2024-07-29 00:56:50 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-29 00:56:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-29 00:56:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0729 00:56:50.371976       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-538663fb-48cb-4c33-bcb4-84cdee3dc286" provisioned
	I0729 00:56:50.372006       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0729 00:56:50.372014       1 volume_store.go:212] Trying to save persistentvolume "pvc-538663fb-48cb-4c33-bcb4-84cdee3dc286"
	I0729 00:56:50.378274       1 volume_store.go:219] persistentvolume "pvc-538663fb-48cb-4c33-bcb4-84cdee3dc286" saved
	I0729 00:56:50.378396       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"538663fb-48cb-4c33-bcb4-84cdee3dc286", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-538663fb-48cb-4c33-bcb4-84cdee3dc286
	
	
	==> storage-provisioner [984eec4a17e8] <==
	I0729 00:56:08.631138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 00:56:08.631708       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
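
This earlier storage-provisioner instance died immediately because the in-cluster apiserver endpoint (10.96.0.1:443) refused connections, i.e. it came up while the apiserver was still down; the replacement instance above started cleanly. A direct apiserver health probe (sketch, assuming the context is reachable):

	kubectl --context functional-843000 get --raw /readyz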
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-843000 -n functional-843000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-843000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-mx2sj kubernetes-dashboard-779776cb65-fl4d6
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-843000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-mx2sj kubernetes-dashboard-779776cb65-fl4d6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-843000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-mx2sj kubernetes-dashboard-779776cb65-fl4d6: exit status 1 (50.174709ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-843000/192.168.105.4
	Start Time:       Sun, 28 Jul 2024 17:57:12 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://5d026320c091355a43a7b96341a53339fa58cbff1b1ac6c8cbca84867f0bf46e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 28 Jul 2024 17:57:13 -0700
	      Finished:     Sun, 28 Jul 2024 17:57:13 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9lkd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-g9lkd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-843000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.017s (1.017s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-mx2sj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-fl4d6" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-843000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-mx2sj kubernetes-dashboard-779776cb65-fl4d6: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.55s)

TestMultiControlPlane/serial/StopSecondaryNode (312.32s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-297000 node stop m02 -v=7 --alsologtostderr: (12.196986083s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
E0728 18:04:17.623210    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:04:56.772160    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 18:06:33.759832    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:07:01.465694    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr: exit status 7 (3m45.050530042s)

-- stdout --
	ha-297000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-297000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-297000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-297000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0728 18:04:00.630850    3287 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:04:00.630999    3287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:04:00.631002    3287 out.go:304] Setting ErrFile to fd 2...
	I0728 18:04:00.631004    3287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:04:00.631131    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:04:00.631250    3287 out.go:298] Setting JSON to false
	I0728 18:04:00.631263    3287 mustload.go:65] Loading cluster: ha-297000
	I0728 18:04:00.631300    3287 notify.go:220] Checking for updates...
	I0728 18:04:00.631496    3287 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:04:00.631502    3287 status.go:255] checking status of ha-297000 ...
	I0728 18:04:00.632244    3287 status.go:330] ha-297000 host status = "Running" (err=<nil>)
	I0728 18:04:00.632254    3287 host.go:66] Checking if "ha-297000" exists ...
	I0728 18:04:00.632356    3287 host.go:66] Checking if "ha-297000" exists ...
	I0728 18:04:00.632476    3287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:04:00.632483    3287 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/id_rsa Username:docker}
	W0728 18:05:15.635392    3287 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0728 18:05:15.635480    3287 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0728 18:05:15.635491    3287 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0728 18:05:15.635495    3287 status.go:257] ha-297000 status: &{Name:ha-297000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:05:15.635509    3287 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0728 18:05:15.635513    3287 status.go:255] checking status of ha-297000-m02 ...
	I0728 18:05:15.635749    3287 status.go:330] ha-297000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:05:15.635754    3287 status.go:343] host is not running, skipping remaining checks
	I0728 18:05:15.635756    3287 status.go:257] ha-297000-m02 status: &{Name:ha-297000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:05:15.635760    3287 status.go:255] checking status of ha-297000-m03 ...
	I0728 18:05:15.636372    3287 status.go:330] ha-297000-m03 host status = "Running" (err=<nil>)
	I0728 18:05:15.636381    3287 host.go:66] Checking if "ha-297000-m03" exists ...
	I0728 18:05:15.636495    3287 host.go:66] Checking if "ha-297000-m03" exists ...
	I0728 18:05:15.636634    3287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:05:15.636640    3287 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m03/id_rsa Username:docker}
	W0728 18:06:30.639003    3287 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0728 18:06:30.639049    3287 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0728 18:06:30.639057    3287 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0728 18:06:30.639061    3287 status.go:257] ha-297000-m03 status: &{Name:ha-297000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:06:30.639070    3287 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0728 18:06:30.639075    3287 status.go:255] checking status of ha-297000-m04 ...
	I0728 18:06:30.639749    3287 status.go:330] ha-297000-m04 host status = "Running" (err=<nil>)
	I0728 18:06:30.639755    3287 host.go:66] Checking if "ha-297000-m04" exists ...
	I0728 18:06:30.639857    3287 host.go:66] Checking if "ha-297000-m04" exists ...
	I0728 18:06:30.639968    3287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:06:30.639974    3287 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m04/id_rsa Username:docker}
	W0728 18:07:45.642975    3287 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0728 18:07:45.643153    3287 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0728 18:07:45.643189    3287 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0728 18:07:45.643210    3287 status.go:257] ha-297000-m04 status: &{Name:ha-297000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:07:45.643250    3287 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr": ha-297000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-297000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr": ha-297000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-297000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr": ha-297000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-297000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-297000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 3 (1m15.073774458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 18:09:00.721223    3341 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0728 18:09:00.721253    3341 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.32s)
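
Every failing check in this test reduces to SSH dials to the node IPs (192.168.105.5/.7/.8 on port 22) timing out, so the VMs or their socket_vmnet networking stopped answering rather than Kubernetes itself degrading. A host-side reachability check against the primary node (sketch; the key path is copied from the log above and assumes the profile still exists):

	ssh -o ConnectTimeout=5 -i /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/id_rsa docker@192.168.105.5 true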

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.12s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0728 18:09:56.772693    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 18:11:19.841717    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.083782833s)
ha_test.go:413: expected profile "ha-297000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-297000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-297000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-297000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
E0728 18:11:33.758975    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 3 (1m15.040300584s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 18:12:45.844005    3448 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0728 18:12:45.844046    3448 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.12s)
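
The failing assertion at ha_test.go:413 reduces to decoding `out/minikube-darwin-arm64 profile list --output json` and comparing the profile's Status field against "Degraded". A minimal Go sketch of that kind of check, using only the "valid"/"Name"/"Status" fields visible in the JSON above (the struct below is illustrative, not minikube's real type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the fields of `profile list --output json`
	// that the assertion touches.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-297000" && p.Status != "Degraded" {
				fmt.Printf("expected Degraded, got %q\n", p.Status)
			}
		}
	}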

TestMultiControlPlane/serial/RestartSecondaryNode (305.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.124360167s)

-- stdout --
	* Starting "ha-297000-m02" control-plane node in "ha-297000" cluster
	* Restarting existing qemu2 VM for "ha-297000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-297000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:12:45.910585    3456 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:12:45.910922    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:12:45.910927    3456 out.go:304] Setting ErrFile to fd 2...
	I0728 18:12:45.910930    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:12:45.911085    3456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:12:45.911370    3456 mustload.go:65] Loading cluster: ha-297000
	I0728 18:12:45.911657    3456 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0728 18:12:45.911911    3456 host.go:58] "ha-297000-m02" host status: Stopped
	I0728 18:12:45.916510    3456 out.go:177] * Starting "ha-297000-m02" control-plane node in "ha-297000" cluster
	I0728 18:12:45.919482    3456 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:12:45.919500    3456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:12:45.919518    3456 cache.go:56] Caching tarball of preloaded images
	I0728 18:12:45.919651    3456 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:12:45.919672    3456 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:12:45.919770    3456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/ha-297000/config.json ...
	I0728 18:12:45.920278    3456 start.go:360] acquireMachinesLock for ha-297000-m02: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:12:45.920334    3456 start.go:364] duration metric: took 38.584µs to acquireMachinesLock for "ha-297000-m02"
	I0728 18:12:45.920343    3456 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:12:45.920349    3456 fix.go:54] fixHost starting: m02
	I0728 18:12:45.920526    3456 fix.go:112] recreateIfNeeded on ha-297000-m02: state=Stopped err=<nil>
	W0728 18:12:45.920533    3456 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:12:45.925342    3456 out.go:177] * Restarting existing qemu2 VM for "ha-297000-m02" ...
	I0728 18:12:45.929443    3456 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:12:45.929495    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:64:00:5e:d8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/disk.qcow2
	I0728 18:12:45.932220    3456 main.go:141] libmachine: STDOUT: 
	I0728 18:12:45.932241    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:12:45.932274    3456 fix.go:56] duration metric: took 11.924958ms for fixHost
	I0728 18:12:45.932279    3456 start.go:83] releasing machines lock for "ha-297000-m02", held for 11.939166ms
	W0728 18:12:45.932286    3456 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:12:45.932323    3456 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:12:45.932328    3456 start.go:729] Will try again in 5 seconds ...
	I0728 18:12:50.934558    3456 start.go:360] acquireMachinesLock for ha-297000-m02: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:12:50.934971    3456 start.go:364] duration metric: took 328.875µs to acquireMachinesLock for "ha-297000-m02"
	I0728 18:12:50.935096    3456 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:12:50.935112    3456 fix.go:54] fixHost starting: m02
	I0728 18:12:50.935724    3456 fix.go:112] recreateIfNeeded on ha-297000-m02: state=Stopped err=<nil>
	W0728 18:12:50.935740    3456 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:12:50.940147    3456 out.go:177] * Restarting existing qemu2 VM for "ha-297000-m02" ...
	I0728 18:12:50.944092    3456 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:12:50.944265    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:64:00:5e:d8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/disk.qcow2
	I0728 18:12:50.950917    3456 main.go:141] libmachine: STDOUT: 
	I0728 18:12:50.950971    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:12:50.951026    3456 fix.go:56] duration metric: took 15.914958ms for fixHost
	I0728 18:12:50.951075    3456 start.go:83] releasing machines lock for "ha-297000-m02", held for 16.053375ms
	W0728 18:12:50.951244    3456 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:12:50.955153    3456 out.go:177] 
	W0728 18:12:50.959156    3456 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:12:50.959181    3456 out.go:239] * 
	* 
	W0728 18:12:50.963946    3456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:12:50.968133    3456 out.go:177] 

** /stderr **
ha_test.go:422: I0728 18:12:45.910585    3456 out.go:291] Setting OutFile to fd 1 ...
I0728 18:12:45.910922    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:12:45.910927    3456 out.go:304] Setting ErrFile to fd 2...
I0728 18:12:45.910930    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:12:45.911085    3456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 18:12:45.911370    3456 mustload.go:65] Loading cluster: ha-297000
I0728 18:12:45.911657    3456 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0728 18:12:45.911911    3456 host.go:58] "ha-297000-m02" host status: Stopped
I0728 18:12:45.916510    3456 out.go:177] * Starting "ha-297000-m02" control-plane node in "ha-297000" cluster
I0728 18:12:45.919482    3456 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0728 18:12:45.919500    3456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0728 18:12:45.919518    3456 cache.go:56] Caching tarball of preloaded images
I0728 18:12:45.919651    3456 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0728 18:12:45.919672    3456 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0728 18:12:45.919770    3456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/ha-297000/config.json ...
I0728 18:12:45.920278    3456 start.go:360] acquireMachinesLock for ha-297000-m02: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0728 18:12:45.920334    3456 start.go:364] duration metric: took 38.584µs to acquireMachinesLock for "ha-297000-m02"
I0728 18:12:45.920343    3456 start.go:96] Skipping create...Using existing machine configuration
I0728 18:12:45.920349    3456 fix.go:54] fixHost starting: m02
I0728 18:12:45.920526    3456 fix.go:112] recreateIfNeeded on ha-297000-m02: state=Stopped err=<nil>
W0728 18:12:45.920533    3456 fix.go:138] unexpected machine state, will restart: <nil>
I0728 18:12:45.925342    3456 out.go:177] * Restarting existing qemu2 VM for "ha-297000-m02" ...
I0728 18:12:45.929443    3456 qemu.go:418] Using hvf for hardware acceleration
I0728 18:12:45.929495    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:64:00:5e:d8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/disk.qcow2
I0728 18:12:45.932220    3456 main.go:141] libmachine: STDOUT: 
I0728 18:12:45.932241    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0728 18:12:45.932274    3456 fix.go:56] duration metric: took 11.924958ms for fixHost
I0728 18:12:45.932279    3456 start.go:83] releasing machines lock for "ha-297000-m02", held for 11.939166ms
W0728 18:12:45.932286    3456 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0728 18:12:45.932323    3456 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0728 18:12:45.932328    3456 start.go:729] Will try again in 5 seconds ...
I0728 18:12:50.934558    3456 start.go:360] acquireMachinesLock for ha-297000-m02: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0728 18:12:50.934971    3456 start.go:364] duration metric: took 328.875µs to acquireMachinesLock for "ha-297000-m02"
I0728 18:12:50.935096    3456 start.go:96] Skipping create...Using existing machine configuration
I0728 18:12:50.935112    3456 fix.go:54] fixHost starting: m02
I0728 18:12:50.935724    3456 fix.go:112] recreateIfNeeded on ha-297000-m02: state=Stopped err=<nil>
W0728 18:12:50.935740    3456 fix.go:138] unexpected machine state, will restart: <nil>
I0728 18:12:50.940147    3456 out.go:177] * Restarting existing qemu2 VM for "ha-297000-m02" ...
I0728 18:12:50.944092    3456 qemu.go:418] Using hvf for hardware acceleration
I0728 18:12:50.944265    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:64:00:5e:d8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m02/disk.qcow2
I0728 18:12:50.950917    3456 main.go:141] libmachine: STDOUT: 
I0728 18:12:50.950971    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0728 18:12:50.951026    3456 fix.go:56] duration metric: took 15.914958ms for fixHost
I0728 18:12:50.951075    3456 start.go:83] releasing machines lock for "ha-297000-m02", held for 16.053375ms
W0728 18:12:50.951244    3456 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0728 18:12:50.955153    3456 out.go:177] 
W0728 18:12:50.959156    3456 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0728 18:12:50.959181    3456 out.go:239] * 
* 
W0728 18:12:50.963946    3456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0728 18:12:50.968133    3456 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-297000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
E0728 18:14:56.773563    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 18:16:33.760748    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr: exit status 7 (3m45.061403542s)

-- stdout --
	ha-297000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-297000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-297000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-297000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0728 18:12:51.024189    3460 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:12:51.024361    3460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:12:51.024365    3460 out.go:304] Setting ErrFile to fd 2...
	I0728 18:12:51.024368    3460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:12:51.024533    3460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:12:51.024683    3460 out.go:298] Setting JSON to false
	I0728 18:12:51.024701    3460 mustload.go:65] Loading cluster: ha-297000
	I0728 18:12:51.024742    3460 notify.go:220] Checking for updates...
	I0728 18:12:51.024953    3460 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:12:51.024960    3460 status.go:255] checking status of ha-297000 ...
	I0728 18:12:51.025775    3460 status.go:330] ha-297000 host status = "Running" (err=<nil>)
	I0728 18:12:51.025785    3460 host.go:66] Checking if "ha-297000" exists ...
	I0728 18:12:51.025912    3460 host.go:66] Checking if "ha-297000" exists ...
	I0728 18:12:51.026036    3460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:12:51.026045    3460 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/id_rsa Username:docker}
	W0728 18:14:06.027877    3460 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0728 18:14:06.027985    3460 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0728 18:14:06.027996    3460 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0728 18:14:06.028000    3460 status.go:257] ha-297000 status: &{Name:ha-297000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:14:06.028013    3460 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0728 18:14:06.028018    3460 status.go:255] checking status of ha-297000-m02 ...
	I0728 18:14:06.028322    3460 status.go:330] ha-297000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:14:06.028327    3460 status.go:343] host is not running, skipping remaining checks
	I0728 18:14:06.028329    3460 status.go:257] ha-297000-m02 status: &{Name:ha-297000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:14:06.028337    3460 status.go:255] checking status of ha-297000-m03 ...
	I0728 18:14:06.028959    3460 status.go:330] ha-297000-m03 host status = "Running" (err=<nil>)
	I0728 18:14:06.028969    3460 host.go:66] Checking if "ha-297000-m03" exists ...
	I0728 18:14:06.029097    3460 host.go:66] Checking if "ha-297000-m03" exists ...
	I0728 18:14:06.029220    3460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:14:06.029225    3460 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m03/id_rsa Username:docker}
	W0728 18:15:21.030951    3460 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0728 18:15:21.031046    3460 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0728 18:15:21.031064    3460 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0728 18:15:21.031074    3460 status.go:257] ha-297000-m03 status: &{Name:ha-297000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:15:21.031092    3460 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0728 18:15:21.031102    3460 status.go:255] checking status of ha-297000-m04 ...
	I0728 18:15:21.032487    3460 status.go:330] ha-297000-m04 host status = "Running" (err=<nil>)
	I0728 18:15:21.032501    3460 host.go:66] Checking if "ha-297000-m04" exists ...
	I0728 18:15:21.032744    3460 host.go:66] Checking if "ha-297000-m04" exists ...
	I0728 18:15:21.032995    3460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 18:15:21.033010    3460 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m04/id_rsa Username:docker}
	W0728 18:16:36.035255    3460 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0728 18:16:36.035297    3460 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0728 18:16:36.035307    3460 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0728 18:16:36.035311    3460 status.go:257] ha-297000-m04 status: &{Name:ha-297000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0728 18:16:36.035321    3460 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 3 (1m15.039369541s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0728 18:17:51.071768    3512 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0728 18:17:51.071779    3512 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.23s)
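
Both restart attempts above fail at the same point: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU never gets the network fd it is launched with (-netdev socket,id=net0,fd=3). A quick way to reproduce that failure mode outside minikube is to dial the socket directly; this sketch assumes nothing beyond the socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the control socket the qemu2 driver hands to
		// socket_vmnet_client. A "connection refused" here matches the
		// driver's STDERR: the path exists but no daemon is serving it.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}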

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.54s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-297000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-297000 -v=7 --alsologtostderr
E0728 18:21:33.754450    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:24:56.766839    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-297000 -v=7 --alsologtostderr: (5m27.168452834s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-297000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-297000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.234247625s)

-- stdout --
	* [ha-297000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-297000" primary control-plane node in "ha-297000" cluster
	* Restarting existing qemu2 VM for "ha-297000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-297000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:25:48.402939    3658 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:25:48.403119    3658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:48.403124    3658 out.go:304] Setting ErrFile to fd 2...
	I0728 18:25:48.403127    3658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:48.403317    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:25:48.404580    3658 out.go:298] Setting JSON to false
	I0728 18:25:48.425138    3658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3319,"bootTime":1722213029,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:25:48.425203    3658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:25:48.430333    3658 out.go:177] * [ha-297000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:25:48.438374    3658 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:25:48.438411    3658 notify.go:220] Checking for updates...
	I0728 18:25:48.446303    3658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:25:48.453337    3658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:25:48.456310    3658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:25:48.459370    3658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:25:48.462233    3658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:25:48.465677    3658 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:25:48.465741    3658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:25:48.470300    3658 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:25:48.477313    3658 start.go:297] selected driver: qemu2
	I0728 18:25:48.477319    3658 start.go:901] validating driver "qemu2" against &{Name:ha-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-297000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:25:48.477399    3658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:25:48.480385    3658 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:25:48.480454    3658 cni.go:84] Creating CNI manager for ""
	I0728 18:25:48.480462    3658 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0728 18:25:48.480518    3658 start.go:340] cluster config:
	{Name:ha-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-297000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:25:48.484881    3658 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:25:48.493252    3658 out.go:177] * Starting "ha-297000" primary control-plane node in "ha-297000" cluster
	I0728 18:25:48.497351    3658 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:25:48.497370    3658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:25:48.497382    3658 cache.go:56] Caching tarball of preloaded images
	I0728 18:25:48.497444    3658 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:25:48.497450    3658 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:25:48.497529    3658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/ha-297000/config.json ...
	I0728 18:25:48.498007    3658 start.go:360] acquireMachinesLock for ha-297000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:25:48.498048    3658 start.go:364] duration metric: took 34.375µs to acquireMachinesLock for "ha-297000"
	I0728 18:25:48.498058    3658 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:25:48.498064    3658 fix.go:54] fixHost starting: 
	I0728 18:25:48.498182    3658 fix.go:112] recreateIfNeeded on ha-297000: state=Stopped err=<nil>
	W0728 18:25:48.498190    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:25:48.502361    3658 out.go:177] * Restarting existing qemu2 VM for "ha-297000" ...
	I0728 18:25:48.509334    3658 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:25:48.509373    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:0e:14:18:f1:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/disk.qcow2
	I0728 18:25:48.511586    3658 main.go:141] libmachine: STDOUT: 
	I0728 18:25:48.511612    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:25:48.511641    3658 fix.go:56] duration metric: took 13.577375ms for fixHost
	I0728 18:25:48.511645    3658 start.go:83] releasing machines lock for "ha-297000", held for 13.592375ms
	W0728 18:25:48.511651    3658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:25:48.511677    3658 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:25:48.511682    3658 start.go:729] Will try again in 5 seconds ...
	I0728 18:25:53.513863    3658 start.go:360] acquireMachinesLock for ha-297000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:25:53.514341    3658 start.go:364] duration metric: took 357.958µs to acquireMachinesLock for "ha-297000"
	I0728 18:25:53.514519    3658 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:25:53.514535    3658 fix.go:54] fixHost starting: 
	I0728 18:25:53.515342    3658 fix.go:112] recreateIfNeeded on ha-297000: state=Stopped err=<nil>
	W0728 18:25:53.515369    3658 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:25:53.523590    3658 out.go:177] * Restarting existing qemu2 VM for "ha-297000" ...
	I0728 18:25:53.527729    3658 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:25:53.527974    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:0e:14:18:f1:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000/disk.qcow2
	I0728 18:25:53.537029    3658 main.go:141] libmachine: STDOUT: 
	I0728 18:25:53.537083    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:25:53.537144    3658 fix.go:56] duration metric: took 22.609167ms for fixHost
	I0728 18:25:53.537161    3658 start.go:83] releasing machines lock for "ha-297000", held for 22.738292ms
	W0728 18:25:53.537310    3658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:25:53.544732    3658 out.go:177] 
	W0728 18:25:53.548808    3658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:25:53.548829    3658 out.go:239] * 
	* 
	W0728 18:25:53.551441    3658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:25:53.559781    3658 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-297000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-297000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 7 (32.459875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.54s)
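
The start log also shows the driver's recovery protocol: fixHost fails, the machines lock is released, exactly one retry runs five seconds later, and a second failure exits with GUEST_PROVISION. A stripped-down sketch of that control flow, where startHost stands in for the real driver call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for fixHost/driver start; always fails the way the
		// log above does.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}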

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.59575ms)

-- stdout --
	* The control-plane node ha-297000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-297000"

-- /stdout --
** stderr ** 
	I0728 18:25:53.699871    3670 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:25:53.700119    3670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.700125    3670 out.go:304] Setting ErrFile to fd 2...
	I0728 18:25:53.700127    3670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.700257    3670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:25:53.700474    3670 mustload.go:65] Loading cluster: ha-297000
	I0728 18:25:53.700688    3670 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0728 18:25:53.701003    3670 out.go:239] ! The control-plane node ha-297000 host is not running (will try others): state=Stopped
	! The control-plane node ha-297000 host is not running (will try others): state=Stopped
	W0728 18:25:53.701121    3670 out.go:239] ! The control-plane node ha-297000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-297000-m02 host is not running (will try others): state=Stopped
	I0728 18:25:53.704718    3670 out.go:177] * The control-plane node ha-297000-m03 host is not running: state=Stopped
	I0728 18:25:53.707718    3670 out.go:177]   To start a cluster, run: "minikube start -p ha-297000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-297000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr: exit status 7 (29.492959ms)

-- stdout --
	ha-297000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-297000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-297000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-297000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:25:53.738946    3672 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:25:53.739096    3672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.739103    3672 out.go:304] Setting ErrFile to fd 2...
	I0728 18:25:53.739106    3672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.739232    3672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:25:53.739359    3672 out.go:298] Setting JSON to false
	I0728 18:25:53.739368    3672 mustload.go:65] Loading cluster: ha-297000
	I0728 18:25:53.739424    3672 notify.go:220] Checking for updates...
	I0728 18:25:53.739614    3672 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:25:53.739620    3672 status.go:255] checking status of ha-297000 ...
	I0728 18:25:53.739829    3672 status.go:330] ha-297000 host status = "Stopped" (err=<nil>)
	I0728 18:25:53.739833    3672 status.go:343] host is not running, skipping remaining checks
	I0728 18:25:53.739835    3672 status.go:257] ha-297000 status: &{Name:ha-297000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:25:53.739844    3672 status.go:255] checking status of ha-297000-m02 ...
	I0728 18:25:53.739933    3672 status.go:330] ha-297000-m02 host status = "Stopped" (err=<nil>)
	I0728 18:25:53.739935    3672 status.go:343] host is not running, skipping remaining checks
	I0728 18:25:53.739937    3672 status.go:257] ha-297000-m02 status: &{Name:ha-297000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:25:53.739941    3672 status.go:255] checking status of ha-297000-m03 ...
	I0728 18:25:53.740026    3672 status.go:330] ha-297000-m03 host status = "Stopped" (err=<nil>)
	I0728 18:25:53.740029    3672 status.go:343] host is not running, skipping remaining checks
	I0728 18:25:53.740030    3672 status.go:257] ha-297000-m03 status: &{Name:ha-297000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 18:25:53.740035    3672 status.go:255] checking status of ha-297000-m04 ...
	I0728 18:25:53.740127    3672 status.go:330] ha-297000-m04 host status = "Stopped" (err=<nil>)
	I0728 18:25:53.740130    3672 status.go:343] host is not running, skipping remaining checks
	I0728 18:25:53.740131    3672 status.go:257] ha-297000-m04 status: &{Name:ha-297000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 7 (29.106333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-297000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-297000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-297000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-297000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 7 (29.080167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
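
Note: the assertion above decodes the `profile list --output json` blob and compares the profile's Status field against "Degraded". A minimal sketch of that check in Go, assuming only the Name and Status fields visible in the log (the struct is illustrative, not minikube's own type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // profileList mirrors just the two fields the assertion reads;
    // the real config object is far larger, as the log above shows.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64",
    		"profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("profile list failed:", err)
    		return
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	for _, p := range pl.Valid {
    		// The test expected "Degraded" after deleting one control
    		// plane; this run reported "Stopped" since no node was up.
    		fmt.Printf("%s: %s\n", p.Name, p.Status)
    	}
    }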

TestMultiControlPlane/serial/StopCluster (93.61s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 stop -v=7 --alsologtostderr
E0728 18:26:33.754216    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 stop -v=7 --alsologtostderr: signal: killed (1m33.534570542s)

-- stdout --
	* Stopping node "ha-297000-m04"  ...
	* Stopping node "ha-297000-m03"  ...

-- /stdout --
** stderr ** 
	I0728 18:25:53.876952    3681 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:25:53.877114    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.877117    3681 out.go:304] Setting ErrFile to fd 2...
	I0728 18:25:53.877119    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:25:53.877248    3681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:25:53.877462    3681 out.go:298] Setting JSON to false
	I0728 18:25:53.877555    3681 mustload.go:65] Loading cluster: ha-297000
	I0728 18:25:53.877764    3681 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:25:53.877818    3681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/ha-297000/config.json ...
	I0728 18:25:53.878044    3681 mustload.go:65] Loading cluster: ha-297000
	I0728 18:25:53.878120    3681 config.go:182] Loaded profile config "ha-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:25:53.878138    3681 stop.go:39] StopHost: ha-297000-m04
	I0728 18:25:53.882666    3681 out.go:177] * Stopping node "ha-297000-m04"  ...
	I0728 18:25:53.889640    3681 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0728 18:25:53.889668    3681 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0728 18:25:53.889675    3681 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m04/id_rsa Username:docker}
	W0728 18:27:08.892410    3681 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0728 18:27:08.892731    3681 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0728 18:27:08.892882    3681 main.go:141] libmachine: Stopping "ha-297000-m04"...
	I0728 18:27:08.893040    3681 stop.go:66] stop err: Machine "ha-297000-m04" is already stopped.
	I0728 18:27:08.893085    3681 stop.go:69] host is already stopped
	I0728 18:27:08.893114    3681 stop.go:39] StopHost: ha-297000-m03
	I0728 18:27:08.898472    3681 out.go:177] * Stopping node "ha-297000-m03"  ...
	I0728 18:27:08.906294    3681 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0728 18:27:08.906475    3681 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0728 18:27:08.906506    3681 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/ha-297000-m03/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-297000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr: context deadline exceeded (2.833µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-297000 -n ha-297000: exit status 7 (72.397042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (93.61s)
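
Note: the timestamps above show where the 93s budget went. The SSH dial to the already-stopped m04 (192.168.105.8:22) blocked from 18:25:53 to 18:27:08, about 75s, which looks like the OS default TCP connect timeout, and the same dial to m03 was still pending when the runner killed the command. A short explicit timeout fails fast instead; a sketch, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The stopped node's SSH endpoint from the log above.
    	conn, err := net.DialTimeout("tcp", "192.168.105.8:22", 3*time.Second)
    	if err != nil {
    		// A stopped VM fails here in seconds instead of blocking
    		// for the OS connect timeout (~75s in the timestamps above).
    		fmt.Println("host unreachable, skipping config backup:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("ssh port reachable")
    }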

TestImageBuild/serial/Setup (10.35s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-455000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-455000 --driver=qemu2 : exit status 80 (10.277924459s)

-- stdout --
	* [image-455000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-455000" primary control-plane node in "image-455000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-455000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-455000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-455000 -n image-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-455000 -n image-455000: exit status 7 (66.835875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.35s)
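
Note: this failure, and every other qemu2 start failure in this report, has the same root cause: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, meaning the socket_vmnet daemon is not listening on the build host. A standalone probe of that socket (a sketch; the path is the one the driver logs above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same unix socket the qemu2 driver hands to socket_vmnet_client.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// "connection refused" here matches the ERROR lines above and
    		// points at the daemon on the host, not at minikube itself.
    		fmt.Println("socket_vmnet not listening:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is up")
    }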

TestJSONOutput/start/Command (9.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-847000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-847000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.776907042s)

-- stdout --
	{"specversion":"1.0","id":"9643d593-32b0-4767-862f-4d70c38595b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-847000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"657689a4-b238-4539-8cd4-56b99f597c47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"086568db-ee36-4c4b-9136-56d6cab86aa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig"}}
	{"specversion":"1.0","id":"1908f075-0e01-42a6-b646-b87ef6376189","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9f73185e-5ee6-4949-8e12-909387f93179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0fb73c04-5c96-4be0-86cb-3d95a044f39f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube"}}
	{"specversion":"1.0","id":"7c6b57e5-a2bf-4345-a28f-79bb2fb4b425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e047faac-7c09-401d-81b7-eefa6690242e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e912d1ac-0f03-4cff-aadb-5ccf174e3597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6ff6ffa9-e539-437c-bd1c-3fffd35253b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-847000\" primary control-plane node in \"json-output-847000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"60c4ff68-12da-4f36-acdc-940069273755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"93b27d73-e9c3-4f4c-95c9-875379f751f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-847000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"cdbcc0ad-1a27-415d-8c27-1fceba467612","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"adcbd238-6d6e-4482-b26a-b5d2ded73a84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"04ed6421-b1f5-455b-8ebb-e723f374d6ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-847000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7803a888-777f-4ff2-adcc-194585a434bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"fa300779-040f-43fe-b425-29f57c60c110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-847000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
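
Note: the "converting to cloud events" failure is mechanical: the test parses stdout line by line as JSON, and the raw "OUTPUT:" / "ERROR:" lines emitted during VM creation are not JSON. The exact error text is reproducible in isolation:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	var v map[string]interface{}
    	// First non-JSON line from the stdout above.
    	err := json.Unmarshal([]byte("OUTPUT: "), &v)
    	// Prints: invalid character 'O' looking for beginning of value,
    	// matching the error reported at json_output_test.go:70.
    	fmt.Println(err)
    }

The unpause failure below trips over the same parser, there with the leading '*' of the plain-text status message.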

TestJSONOutput/pause/Command (0.05s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-847000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-847000 --output=json --user=testUser: exit status 83 (50.781375ms)

-- stdout --
	{"specversion":"1.0","id":"d5ab8880-a5ce-42fc-bb5f-7da1955b09b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-847000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"152e92ce-c8bb-4457-bf83-e089cf437ff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-847000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-847000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.05s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-847000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-847000 --output=json --user=testUser: exit status 83 (45.322708ms)

-- stdout --
	* The control-plane node json-output-847000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-847000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-847000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-847000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-125000 --driver=qemu2 
E0728 18:27:59.837854    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-125000 --driver=qemu2 : exit status 80 (9.815928958s)

-- stdout --
	* [first-125000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-125000" primary control-plane node in "first-125000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-125000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-125000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-125000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-28 18:28:01.722669 -0700 PDT m=+2532.279188543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-127000 -n second-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-127000 -n second-127000: exit status 85 (76.874042ms)

-- stdout --
	* Profile "second-127000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-127000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-127000" host is not running, skipping log retrieval (state="* Profile \"second-127000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-127000\"")
helpers_test.go:175: Cleaning up "second-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-127000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-28 18:28:01.90575 -0700 PDT m=+2532.462269293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-125000 -n first-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-125000 -n first-125000: exit status 7 (29.114792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-125000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-125000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-125000
--- FAIL: TestMinikubeProfile (10.11s)
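
Note: the post-mortem distinguishes outcomes purely by exit code: 80 is a guest-provision failure, 85 means the profile does not exist, and 7 means the host is stopped; helpers_test.go treats the last two as "may be ok". Pulling the code out of a failed run in Go looks roughly like this (a sketch, not the helper's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", "second-127000", "-n", "second-127000")
    	out, err := cmd.Output() // out still holds stdout on a non-zero exit
    	if ee, ok := err.(*exec.ExitError); ok {
    		// 85 = profile not found, 7 = host stopped; helpers_test.go
    		// treats both as "may be ok" rather than hard failures.
    		fmt.Printf("exit status %d, output: %s", ee.ExitCode(), out)
    		return
    	}
    	fmt.Printf("output: %s (err=%v)\n", out, err)
    }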

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-568000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-568000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.946947125s)

-- stdout --
	* [mount-start-1-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-568000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-568000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-568000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-568000 -n mount-start-1-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-568000 -n mount-start-1-568000: exit status 7 (66.234667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-429000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-429000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.84715875s)

-- stdout --
	* [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:28:12.237637    3843 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:28:12.237764    3843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:28:12.237768    3843 out.go:304] Setting ErrFile to fd 2...
	I0728 18:28:12.237770    3843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:28:12.237906    3843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:28:12.238901    3843 out.go:298] Setting JSON to false
	I0728 18:28:12.254649    3843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3463,"bootTime":1722213029,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:28:12.254709    3843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:28:12.259762    3843 out.go:177] * [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:28:12.266619    3843 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:28:12.266673    3843 notify.go:220] Checking for updates...
	I0728 18:28:12.274654    3843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:28:12.277661    3843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:28:12.280676    3843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:28:12.283651    3843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:28:12.286630    3843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:28:12.289778    3843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:28:12.292518    3843 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:28:12.299601    3843 start.go:297] selected driver: qemu2
	I0728 18:28:12.299606    3843 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:28:12.299612    3843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:28:12.301729    3843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:28:12.302991    3843 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:28:12.305732    3843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:28:12.305765    3843 cni.go:84] Creating CNI manager for ""
	I0728 18:28:12.305772    3843 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0728 18:28:12.305779    3843 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0728 18:28:12.305805    3843 start.go:340] cluster config:
	{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:28:12.309524    3843 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:28:12.317562    3843 out.go:177] * Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	I0728 18:28:12.320628    3843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:28:12.320642    3843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:28:12.320653    3843 cache.go:56] Caching tarball of preloaded images
	I0728 18:28:12.320707    3843 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:28:12.320712    3843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:28:12.320906    3843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/multinode-429000/config.json ...
	I0728 18:28:12.320917    3843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/multinode-429000/config.json: {Name:mk18ec8b4c0aa72c5022b360d35ed7c2ec8de244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:28:12.321133    3843 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:28:12.321167    3843 start.go:364] duration metric: took 28.542µs to acquireMachinesLock for "multinode-429000"
	I0728 18:28:12.321181    3843 start.go:93] Provisioning new machine with config: &{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:28:12.321212    3843 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:28:12.329623    3843 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:28:12.347301    3843 start.go:159] libmachine.API.Create for "multinode-429000" (driver="qemu2")
	I0728 18:28:12.347328    3843 client.go:168] LocalClient.Create starting
	I0728 18:28:12.347388    3843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:28:12.347419    3843 main.go:141] libmachine: Decoding PEM data...
	I0728 18:28:12.347429    3843 main.go:141] libmachine: Parsing certificate...
	I0728 18:28:12.347464    3843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:28:12.347488    3843 main.go:141] libmachine: Decoding PEM data...
	I0728 18:28:12.347493    3843 main.go:141] libmachine: Parsing certificate...
	I0728 18:28:12.347892    3843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:28:12.493116    3843 main.go:141] libmachine: Creating SSH key...
	I0728 18:28:12.607154    3843 main.go:141] libmachine: Creating Disk image...
	I0728 18:28:12.607160    3843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:28:12.607359    3843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:12.616384    3843 main.go:141] libmachine: STDOUT: 
	I0728 18:28:12.616401    3843 main.go:141] libmachine: STDERR: 
	I0728 18:28:12.616442    3843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2 +20000M
	I0728 18:28:12.624199    3843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:28:12.624217    3843 main.go:141] libmachine: STDERR: 
	I0728 18:28:12.624232    3843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:12.624236    3843 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:28:12.624247    3843 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:28:12.624274    3843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:fb:ea:3f:9e:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:12.625896    3843 main.go:141] libmachine: STDOUT: 
	I0728 18:28:12.625909    3843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:28:12.625929    3843 client.go:171] duration metric: took 278.597083ms to LocalClient.Create
	I0728 18:28:14.628109    3843 start.go:128] duration metric: took 2.306875209s to createHost
	I0728 18:28:14.628189    3843 start.go:83] releasing machines lock for "multinode-429000", held for 2.306997584s
	W0728 18:28:14.628250    3843 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:28:14.639337    3843 out.go:177] * Deleting "multinode-429000" in qemu2 ...
	W0728 18:28:14.665924    3843 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:28:14.665954    3843 start.go:729] Will try again in 5 seconds ...
	I0728 18:28:19.668121    3843 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:28:19.668596    3843 start.go:364] duration metric: took 356.417µs to acquireMachinesLock for "multinode-429000"
	I0728 18:28:19.668722    3843 start.go:93] Provisioning new machine with config: &{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:28:19.668989    3843 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:28:19.684416    3843 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:28:19.734408    3843 start.go:159] libmachine.API.Create for "multinode-429000" (driver="qemu2")
	I0728 18:28:19.734451    3843 client.go:168] LocalClient.Create starting
	I0728 18:28:19.734557    3843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:28:19.734624    3843 main.go:141] libmachine: Decoding PEM data...
	I0728 18:28:19.734642    3843 main.go:141] libmachine: Parsing certificate...
	I0728 18:28:19.734705    3843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:28:19.734749    3843 main.go:141] libmachine: Decoding PEM data...
	I0728 18:28:19.734761    3843 main.go:141] libmachine: Parsing certificate...
	I0728 18:28:19.735405    3843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:28:19.890280    3843 main.go:141] libmachine: Creating SSH key...
	I0728 18:28:19.994435    3843 main.go:141] libmachine: Creating Disk image...
	I0728 18:28:19.994440    3843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:28:19.994637    3843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:20.003609    3843 main.go:141] libmachine: STDOUT: 
	I0728 18:28:20.003627    3843 main.go:141] libmachine: STDERR: 
	I0728 18:28:20.003682    3843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2 +20000M
	I0728 18:28:20.011465    3843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:28:20.011484    3843 main.go:141] libmachine: STDERR: 
	I0728 18:28:20.011508    3843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:20.011512    3843 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:28:20.011521    3843 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:28:20.011549    3843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:82:f8:e1:bc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:28:20.013150    3843 main.go:141] libmachine: STDOUT: 
	I0728 18:28:20.013165    3843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:28:20.013177    3843 client.go:171] duration metric: took 278.722292ms to LocalClient.Create
	I0728 18:28:22.015353    3843 start.go:128] duration metric: took 2.346317417s to createHost
	I0728 18:28:22.015428    3843 start.go:83] releasing machines lock for "multinode-429000", held for 2.346805208s
	W0728 18:28:22.015856    3843 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:28:22.028573    3843 out.go:177] 
	W0728 18:28:22.032658    3843 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:28:22.032686    3843 out.go:239] * 
	* 
	W0728 18:28:22.035122    3843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:28:22.042568    3843 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-429000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (65.966125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.92s)
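
Every start attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and minikube deletes the machine and retries. A minimal Go sketch of a pre-flight check for this condition (a hypothetical diagnostic, not part of minikube or this test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// checkSocketVMnet dials the unix socket that socket_vmnet_client
	// forwards to QEMU. "connection refused" here means the socket_vmnet
	// daemon is not running or not listening on this path, which is the
	// failure recorded in the log above.
	func checkSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is listening")
	}

Run on the CI host before the suite, a check like this would separate "daemon down" from genuine driver regressions without paying for a create/delete/retry cycle in every test.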

TestMultiNode/serial/DeployApp2Nodes (112.72s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.509ms)

** stderr ** 
	error: cluster "multinode-429000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- rollout status deployment/busybox: exit status 1 (57.272333ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.687042ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.550584ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.763083ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.140125ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.320833ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.550458ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.183292ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.242916ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.63975ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.671042ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0728 18:29:56.766742    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.37475ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.074417ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.26875ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.170125ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.216666ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.326583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.72s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-429000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.458583ms)

** stderr ** 
	error: no server found for cluster "multinode-429000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.796292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-429000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-429000 -v 3 --alsologtostderr: exit status 83 (42.777459ms)

-- stdout --
	* The control-plane node multinode-429000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-429000"

-- /stdout --
** stderr ** 
	I0728 18:30:14.959845    4202 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:14.960021    4202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:14.960024    4202 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:14.960027    4202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:14.960149    4202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:14.960397    4202 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:14.960578    4202 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:14.965790    4202 out.go:177] * The control-plane node multinode-429000 host is not running: state=Stopped
	I0728 18:30:14.970708    4202 out.go:177]   To start a cluster, run: "minikube start -p multinode-429000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-429000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.59575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-429000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-429000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.377542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-429000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-429000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-429000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.377ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
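
The "unexpected end of JSON input" above follows directly from the empty kubectl output: with no context to talk to, kubectl writes nothing to stdout, and Go's encoding/json returns exactly that error when asked to decode zero bytes. A self-contained sketch of the failure mode:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// kubectl printed nothing to stdout, so the test effectively
		// decodes an empty byte slice.
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}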

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-429000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-429000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-429000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-429000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.425125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
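
The assertion compares the node count inside the profile's Config against the requested topology (a control plane plus two workers). A sketch of that check against the JSON captured above, using throwaway struct shapes that mirror only the fields involved (hypothetical types, not minikube's own):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just enough of "minikube profile list --output json"
	// for the node-count check; field names match the failure message above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed version of the payload in the failure message.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-429000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wants 3 entries here; the stopped profile reports 1.
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}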

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status --output json --alsologtostderr: exit status 7 (29.438375ms)

-- stdout --
	{"Name":"multinode-429000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0728 18:30:15.166209    4214 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:15.166361    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.166364    4214 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:15.166366    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.166496    4214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:15.166623    4214 out.go:298] Setting JSON to true
	I0728 18:30:15.166632    4214 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:15.166687    4214 notify.go:220] Checking for updates...
	I0728 18:30:15.166839    4214 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:15.166845    4214 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:15.167047    4214 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:15.167050    4214 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:15.167052    4214 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-429000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (28.992ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
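
The decode error is a shape mismatch: for a single-node profile, "minikube status --output json" emits one JSON object (as in the stdout above), while the test unmarshals into a slice ([]cmd.Status). A sketch that tolerates both shapes (hypothetical helper; the real Status type lives in minikube's cmd package):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields visible in the stdout above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	// decodeStatuses accepts either a bare object (single-node output)
	// or an array (multi-node output).
	func decodeStatuses(raw []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-429000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped"}`)
		st, err := decodeStatuses(raw)
		fmt.Println(st, err)
	}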

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 node stop m03: exit status 85 (47.092416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-429000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status: exit status 7 (29.137542ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr: exit status 7 (29.714583ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:30:15.302002    4222 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:15.302123    4222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.302126    4222 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:15.302129    4222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.302258    4222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:15.302369    4222 out.go:298] Setting JSON to false
	I0728 18:30:15.302380    4222 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:15.302430    4222 notify.go:220] Checking for updates...
	I0728 18:30:15.302557    4222 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:15.302563    4222 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:15.302770    4222 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:15.302774    4222 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:15.302776    4222 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr": multinode-429000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.541917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
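
The m03 in these node commands is positional: minikube addresses the primary node by the profile name and additional nodes by mNN suffixes (m02, m03, ...), so a cluster that never got past creating its first machine has no m03 to stop or start. A rough illustration of that naming convention (an assumption for illustration, not minikube's actual code):

	package main

	import "fmt"

	// machineName sketches the <profile>-mNN naming visible in multinode
	// logs; the primary node carries the bare profile name. Assumed
	// behavior for illustration only.
	func machineName(profile string, idx int) string {
		if idx == 1 {
			return profile
		}
		return fmt.Sprintf("%s-m%02d", profile, idx)
	}

	func main() {
		for idx := 1; idx <= 3; idx++ {
			fmt.Println(machineName("multinode-429000", idx))
		}
	}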

TestMultiNode/serial/StartAfterStop (42.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.095958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0728 18:30:15.360776    4226 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:15.361015    4226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.361018    4226 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:15.361021    4226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.361141    4226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:15.361381    4226 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:15.361575    4226 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:15.365756    4226 out.go:177] 
	W0728 18:30:15.368762    4226 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0728 18:30:15.368769    4226 out.go:239] * 
	* 
	W0728 18:30:15.370374    4226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:30:15.373706    4226 out.go:177] 

** /stderr **
multinode_test.go:284: I0728 18:30:15.360776    4226 out.go:291] Setting OutFile to fd 1 ...
I0728 18:30:15.361015    4226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:30:15.361018    4226 out.go:304] Setting ErrFile to fd 2...
I0728 18:30:15.361021    4226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 18:30:15.361141    4226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 18:30:15.361381    4226 mustload.go:65] Loading cluster: multinode-429000
I0728 18:30:15.361575    4226 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 18:30:15.365756    4226 out.go:177] 
W0728 18:30:15.368762    4226 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0728 18:30:15.368769    4226 out.go:239] * 
* 
W0728 18:30:15.370374    4226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0728 18:30:15.373706    4226 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-429000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (30.058958ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:30:15.407088    4228 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:15.407229    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.407232    4228 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:15.407234    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:15.407373    4228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:15.407492    4228 out.go:298] Setting JSON to false
	I0728 18:30:15.407505    4228 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:15.407566    4228 notify.go:220] Checking for updates...
	I0728 18:30:15.407734    4228 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:15.407739    4228 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:15.407956    4228 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:15.407960    4228 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:15.407962    4228 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (72.620625ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:30:16.345505    4230 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:16.345674    4230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:16.345678    4230 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:16.345682    4230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:16.345850    4230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:16.346002    4230 out.go:298] Setting JSON to false
	I0728 18:30:16.346021    4230 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:16.346055    4230 notify.go:220] Checking for updates...
	I0728 18:30:16.346315    4230 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:16.346323    4230 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:16.346606    4230 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:16.346611    4230 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:16.346614    4230 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (72.524917ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:30:17.213196    4232 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:17.213368    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:17.213373    4232 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:17.213375    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:17.213532    4232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:17.213681    4232 out.go:298] Setting JSON to false
	I0728 18:30:17.213693    4232 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:17.213731    4232 notify.go:220] Checking for updates...
	I0728 18:30:17.213951    4232 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:17.213962    4232 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:17.214255    4232 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:17.214260    4232 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:17.214263    4232 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (72.3655ms)

-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0728 18:30:18.844235    4234 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:18.844461    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:18.844466    4234 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:18.844469    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:18.844655    4234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:18.844821    4234 out.go:298] Setting JSON to false
	I0728 18:30:18.844834    4234 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:18.844876    4234 notify.go:220] Checking for updates...
	I0728 18:30:18.845110    4234 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:18.845118    4234 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:18.845412    4234 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:18.845417    4234 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:18.845420    4234 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (70.83875ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:30:20.765622    4236 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:20.765804    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:20.765808    4236 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:20.765812    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:20.765982    4236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:20.766132    4236 out.go:298] Setting JSON to false
	I0728 18:30:20.766144    4236 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:20.766189    4236 notify.go:220] Checking for updates...
	I0728 18:30:20.766386    4236 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:20.766396    4236 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:20.766676    4236 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:20.766681    4236 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:20.766685    4236 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (72.433ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:30:23.390953    4240 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:23.391177    4240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:23.391182    4240 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:23.391185    4240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:23.391368    4240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:23.391527    4240 out.go:298] Setting JSON to false
	I0728 18:30:23.391539    4240 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:23.391579    4240 notify.go:220] Checking for updates...
	I0728 18:30:23.391791    4240 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:23.391798    4240 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:23.392092    4240 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:23.392097    4240 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:23.392100    4240 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (71.497666ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:30:30.981588    4242 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:30.981802    4242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:30.981807    4242 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:30.981811    4242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:30.982011    4242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:30.982174    4242 out.go:298] Setting JSON to false
	I0728 18:30:30.982188    4242 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:30.982233    4242 notify.go:220] Checking for updates...
	I0728 18:30:30.982461    4242 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:30.982471    4242 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:30.982781    4242 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:30.982786    4242 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:30.982789    4242 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (74.519334ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:30:43.372140    4249 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:43.372393    4249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:43.372397    4249 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:43.372401    4249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:43.372635    4249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:43.372817    4249 out.go:298] Setting JSON to false
	I0728 18:30:43.372831    4249 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:43.372878    4249 notify.go:220] Checking for updates...
	I0728 18:30:43.373123    4249 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:43.373130    4249 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:43.373435    4249 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:43.373440    4249 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:43.373443    4249 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr: exit status 7 (67.669375ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:30:57.352943    4259 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:30:57.353187    4259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:57.353192    4259 out.go:304] Setting ErrFile to fd 2...
	I0728 18:30:57.353196    4259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:30:57.353389    4259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:30:57.353554    4259 out.go:298] Setting JSON to false
	I0728 18:30:57.353568    4259 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:30:57.353609    4259 notify.go:220] Checking for updates...
	I0728 18:30:57.353840    4259 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:30:57.353849    4259 status.go:255] checking status of multinode-429000 ...
	I0728 18:30:57.354194    4259 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:30:57.354199    4259 status.go:343] host is not running, skipping remaining checks
	I0728 18:30:57.354202    4259 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (32.406458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (42.05s)
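A note on the repeated non-zero exits above: minikube status encodes component health in its exit code as a bitmask (host, kubelet, apiserver, from least to most significant bit), so the persistent exit status 7 means all three are reported down, matching the "Stopped" fields in each stdout block. The check the test loops on can be reproduced by hand with the same binary and profile (the echo is just for illustration):

	out/minikube-darwin-arm64 -p multinode-429000 status -v=7 --alsologtostderr
	echo "status exit code: $?"   # 7 = host, kubelet and apiserver all not running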

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-429000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-429000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-429000: (3.57800625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229506666s)

                                                
                                                
-- stdout --
	* [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	* Restarting existing qemu2 VM for "multinode-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:31:01.053188    4283 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:31:01.053563    4283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:01.053569    4283 out.go:304] Setting ErrFile to fd 2...
	I0728 18:31:01.053573    4283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:01.053820    4283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:31:01.055476    4283 out.go:298] Setting JSON to false
	I0728 18:31:01.075367    4283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3632,"bootTime":1722213029,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:31:01.075443    4283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:31:01.080694    4283 out.go:177] * [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:31:01.087770    4283 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:31:01.087825    4283 notify.go:220] Checking for updates...
	I0728 18:31:01.094653    4283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:31:01.098428    4283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:31:01.101664    4283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:31:01.104680    4283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:31:01.107658    4283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:31:01.110929    4283 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:31:01.110999    4283 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:31:01.115701    4283 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:31:01.122614    4283 start.go:297] selected driver: qemu2
	I0728 18:31:01.122620    4283 start.go:901] validating driver "qemu2" against &{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:31:01.122684    4283 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:31:01.125283    4283 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:31:01.125322    4283 cni.go:84] Creating CNI manager for ""
	I0728 18:31:01.125327    4283 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:31:01.125376    4283 start.go:340] cluster config:
	{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:31:01.129304    4283 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:01.137654    4283 out.go:177] * Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	I0728 18:31:01.141633    4283 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:31:01.141653    4283 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:31:01.141668    4283 cache.go:56] Caching tarball of preloaded images
	I0728 18:31:01.141749    4283 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:31:01.141756    4283 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:31:01.141851    4283 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/multinode-429000/config.json ...
	I0728 18:31:01.142478    4283 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:01.142520    4283 start.go:364] duration metric: took 34.958µs to acquireMachinesLock for "multinode-429000"
	I0728 18:31:01.142532    4283 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:31:01.142537    4283 fix.go:54] fixHost starting: 
	I0728 18:31:01.142683    4283 fix.go:112] recreateIfNeeded on multinode-429000: state=Stopped err=<nil>
	W0728 18:31:01.142693    4283 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:31:01.151643    4283 out.go:177] * Restarting existing qemu2 VM for "multinode-429000" ...
	I0728 18:31:01.155486    4283 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:01.155537    4283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:82:f8:e1:bc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:31:01.158207    4283 main.go:141] libmachine: STDOUT: 
	I0728 18:31:01.158229    4283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:01.158258    4283 fix.go:56] duration metric: took 15.720542ms for fixHost
	I0728 18:31:01.158271    4283 start.go:83] releasing machines lock for "multinode-429000", held for 15.737875ms
	W0728 18:31:01.158280    4283 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:31:01.158327    4283 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:01.158333    4283 start.go:729] Will try again in 5 seconds ...
	I0728 18:31:06.160546    4283 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:06.160946    4283 start.go:364] duration metric: took 318.292µs to acquireMachinesLock for "multinode-429000"
	I0728 18:31:06.161124    4283 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:31:06.161144    4283 fix.go:54] fixHost starting: 
	I0728 18:31:06.161886    4283 fix.go:112] recreateIfNeeded on multinode-429000: state=Stopped err=<nil>
	W0728 18:31:06.161916    4283 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:31:06.171431    4283 out.go:177] * Restarting existing qemu2 VM for "multinode-429000" ...
	I0728 18:31:06.175385    4283 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:06.175630    4283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:82:f8:e1:bc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:31:06.185047    4283 main.go:141] libmachine: STDOUT: 
	I0728 18:31:06.185117    4283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:06.185196    4283 fix.go:56] duration metric: took 24.048458ms for fixHost
	I0728 18:31:06.185217    4283 start.go:83] releasing machines lock for "multinode-429000", held for 24.24925ms
	W0728 18:31:06.185549    4283 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:06.193375    4283 out.go:177] 
	W0728 18:31:06.197511    4283 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:31:06.197550    4283 out.go:239] * 
	* 
	W0728 18:31:06.200470    4283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:31:06.210497    4283 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-429000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (32.653584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.94s)
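Every restart attempt in this group dies at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the VM never gets its network file descriptor and minikube aborts with GUEST_PROVISION. A minimal triage sketch for the CI host, assuming a Homebrew-managed socket_vmnet install (which matches the /opt/socket_vmnet paths logged above):

	# Is the socket present, and is a daemon actually serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Homebrew installs normally run the daemon as a root service:
	sudo brew services restart socket_vmnet

With the daemon back up, the same "minikube start -p multinode-429000" invocation should get past the "Restarting existing qemu2 VM" step.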

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 node delete m03: exit status 83 (38.80025ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-429000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-429000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-429000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr: exit status 7 (28.549166ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:31:06.391851    4301 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:31:06.392065    4301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:06.392068    4301 out.go:304] Setting ErrFile to fd 2...
	I0728 18:31:06.392070    4301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:06.392197    4301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:31:06.392313    4301 out.go:298] Setting JSON to false
	I0728 18:31:06.392322    4301 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:31:06.392392    4301 notify.go:220] Checking for updates...
	I0728 18:31:06.392527    4301 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:31:06.392532    4301 status.go:255] checking status of multinode-429000 ...
	I0728 18:31:06.392730    4301 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:31:06.392734    4301 status.go:343] host is not running, skipping remaining checks
	I0728 18:31:06.392736    4301 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (28.907833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-429000 stop: (3.405636583s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status: exit status 7 (41.777375ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr: exit status 7 (30.994375ms)

                                                
                                                
-- stdout --
	multinode-429000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:31:09.899658    4325 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:31:09.899810    4325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:09.899814    4325 out.go:304] Setting ErrFile to fd 2...
	I0728 18:31:09.899816    4325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:09.899947    4325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:31:09.900052    4325 out.go:298] Setting JSON to false
	I0728 18:31:09.900067    4325 mustload.go:65] Loading cluster: multinode-429000
	I0728 18:31:09.900105    4325 notify.go:220] Checking for updates...
	I0728 18:31:09.900251    4325 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:31:09.900257    4325 status.go:255] checking status of multinode-429000 ...
	I0728 18:31:09.900468    4325 status.go:330] multinode-429000 host status = "Stopped" (err=<nil>)
	I0728 18:31:09.900472    4325 status.go:343] host is not running, skipping remaining checks
	I0728 18:31:09.900474    4325 status.go:257] multinode-429000 status: &{Name:multinode-429000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr": multinode-429000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr": multinode-429000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (29.370166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.51s)
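The stop itself succeeded; what fails is the follow-up assertion. The test counts the "Stopped" host and kubelet entries in the status output and expects one per node, but because the second node was never added earlier in this serial group, only the control-plane block is present. A rough shell equivalent of that check (the expected count of 2 is the test's; the grep only approximates its string matching):

	out/minikube-darwin-arm64 -p multinode-429000 status --alsologtostderr | grep -c "host: Stopped"   # expected 2, got 1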

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18725125s)

                                                
                                                
-- stdout --
	* [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	* Restarting existing qemu2 VM for "multinode-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 18:31:09.959503    4329 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:31:09.959625    4329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:09.959628    4329 out.go:304] Setting ErrFile to fd 2...
	I0728 18:31:09.959631    4329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:09.959756    4329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:31:09.960795    4329 out.go:298] Setting JSON to false
	I0728 18:31:09.977566    4329 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3640,"bootTime":1722213029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:31:09.977643    4329 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:31:09.982402    4329 out.go:177] * [multinode-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:31:09.990364    4329 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:31:09.990398    4329 notify.go:220] Checking for updates...
	I0728 18:31:09.997354    4329 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:31:10.000362    4329 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:31:10.003454    4329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:31:10.006406    4329 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:31:10.009379    4329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:31:10.012676    4329 config.go:182] Loaded profile config "multinode-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:31:10.012933    4329 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:31:10.017306    4329 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:31:10.024354    4329 start.go:297] selected driver: qemu2
	I0728 18:31:10.024360    4329 start.go:901] validating driver "qemu2" against &{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:31:10.024400    4329 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:31:10.026520    4329 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:31:10.026554    4329 cni.go:84] Creating CNI manager for ""
	I0728 18:31:10.026559    4329 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0728 18:31:10.026612    4329 start.go:340] cluster config:
	{Name:multinode-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:31:10.029712    4329 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:10.036341    4329 out.go:177] * Starting "multinode-429000" primary control-plane node in "multinode-429000" cluster
	I0728 18:31:10.040399    4329 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:31:10.040416    4329 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:31:10.040425    4329 cache.go:56] Caching tarball of preloaded images
	I0728 18:31:10.040488    4329 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:31:10.040494    4329 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:31:10.040555    4329 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/multinode-429000/config.json ...
	I0728 18:31:10.040913    4329 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:10.040951    4329 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "multinode-429000"
	I0728 18:31:10.040962    4329 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:31:10.040967    4329 fix.go:54] fixHost starting: 
	I0728 18:31:10.041091    4329 fix.go:112] recreateIfNeeded on multinode-429000: state=Stopped err=<nil>
	W0728 18:31:10.041099    4329 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:31:10.049417    4329 out.go:177] * Restarting existing qemu2 VM for "multinode-429000" ...
	I0728 18:31:10.053383    4329 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:10.053418    4329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:82:f8:e1:bc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:31:10.055264    4329 main.go:141] libmachine: STDOUT: 
	I0728 18:31:10.055280    4329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:10.055315    4329 fix.go:56] duration metric: took 14.34025ms for fixHost
	I0728 18:31:10.055318    4329 start.go:83] releasing machines lock for "multinode-429000", held for 14.3625ms
	W0728 18:31:10.055325    4329 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:31:10.055364    4329 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:10.055368    4329 start.go:729] Will try again in 5 seconds ...
	I0728 18:31:15.057678    4329 start.go:360] acquireMachinesLock for multinode-429000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:15.058251    4329 start.go:364] duration metric: took 451.958µs to acquireMachinesLock for "multinode-429000"
	I0728 18:31:15.058423    4329 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:31:15.058444    4329 fix.go:54] fixHost starting: 
	I0728 18:31:15.059255    4329 fix.go:112] recreateIfNeeded on multinode-429000: state=Stopped err=<nil>
	W0728 18:31:15.059280    4329 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:31:15.064741    4329 out.go:177] * Restarting existing qemu2 VM for "multinode-429000" ...
	I0728 18:31:15.073618    4329 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:15.073822    4329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:82:f8:e1:bc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/multinode-429000/disk.qcow2
	I0728 18:31:15.083211    4329 main.go:141] libmachine: STDOUT: 
	I0728 18:31:15.083270    4329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:15.083348    4329 fix.go:56] duration metric: took 24.905042ms for fixHost
	I0728 18:31:15.083361    4329 start.go:83] releasing machines lock for "multinode-429000", held for 25.08625ms
	W0728 18:31:15.083508    4329 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:15.091805    4329 out.go:177] 
	W0728 18:31:15.095833    4329 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:31:15.095862    4329 out.go:239] * 
	* 
	W0728 18:31:15.098710    4329 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:31:15.105760    4329 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-429000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (70.393167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
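
Every failure in this run reduces to the same root cause: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal stand-alone probe, a sketch that is not part of the test suite, reproduces the connectivity check outside minikube:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing socket_vmnet_client invocations above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

If this probe also reports "connection refused", the daemon is simply not running on the build agent, which would account for every GUEST_PROVISION failure in this report at once.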

TestMultiNode/serial/ValidateNameConflict (20.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-429000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-429000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-429000-m01 --driver=qemu2 : exit status 80 (10.064584083s)

-- stdout --
	* [multinode-429000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-429000-m01" primary control-plane node in "multinode-429000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-429000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-429000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-429000-m02 --driver=qemu2 
E0728 18:31:33.754016    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-429000-m02 --driver=qemu2 : exit status 80 (10.217437166s)

-- stdout --
	* [multinode-429000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-429000-m02" primary control-plane node in "multinode-429000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-429000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-429000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-429000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-429000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-429000: exit status 83 (82.291958ms)

-- stdout --
	* The control-plane node multinode-429000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-429000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-429000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-429000 -n multinode-429000: exit status 7 (30.219292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-429000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.51s)

TestPreload (10.16s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.019034625s)

-- stdout --
	* [test-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-592000" primary control-plane node in "test-preload-592000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-592000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:31:35.832119    4390 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:31:35.832234    4390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:35.832237    4390 out.go:304] Setting ErrFile to fd 2...
	I0728 18:31:35.832240    4390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:31:35.832372    4390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:31:35.833429    4390 out.go:298] Setting JSON to false
	I0728 18:31:35.849352    4390 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3666,"bootTime":1722213029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:31:35.849417    4390 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:31:35.855657    4390 out.go:177] * [test-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:31:35.863564    4390 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:31:35.863651    4390 notify.go:220] Checking for updates...
	I0728 18:31:35.870653    4390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:31:35.873568    4390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:31:35.876584    4390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:31:35.879619    4390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:31:35.882540    4390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:31:35.885891    4390 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:31:35.885959    4390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:31:35.890552    4390 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:31:35.897532    4390 start.go:297] selected driver: qemu2
	I0728 18:31:35.897537    4390 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:31:35.897543    4390 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:31:35.899898    4390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:31:35.903576    4390 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:31:35.906663    4390 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:31:35.906712    4390 cni.go:84] Creating CNI manager for ""
	I0728 18:31:35.906724    4390 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:31:35.906736    4390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:31:35.906763    4390 start.go:340] cluster config:
	{Name:test-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:31:35.910518    4390 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.917573    4390 out.go:177] * Starting "test-preload-592000" primary control-plane node in "test-preload-592000" cluster
	I0728 18:31:35.921541    4390 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0728 18:31:35.921639    4390 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/test-preload-592000/config.json ...
	I0728 18:31:35.921670    4390 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/test-preload-592000/config.json: {Name:mk1b4bd2b3d674d297db1076d474c15b7c303e65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:31:35.921665    4390 cache.go:107] acquiring lock: {Name:mk7b1b69c1606f1420fea70fdfc405dc8ede5ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921674    4390 cache.go:107] acquiring lock: {Name:mk804c6d0364ddb1b01913aec2df2031d2778a94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921674    4390 cache.go:107] acquiring lock: {Name:mk90d2d103ae9873b21adff521d1ba701384b4d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921705    4390 cache.go:107] acquiring lock: {Name:mk20711ecb13df44c696a0b13461feaac694fe99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921938    4390 start.go:360] acquireMachinesLock for test-preload-592000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:35.921937    4390 cache.go:107] acquiring lock: {Name:mk06f0b1bfc69d11854dc277d0362a61743a07c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921979    4390 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "test-preload-592000"
	I0728 18:31:35.922020    4390 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0728 18:31:35.921969    4390 cache.go:107] acquiring lock: {Name:mk5a343ae5009abe0d69aac3b6bb4e58dd5691a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.921979    4390 cache.go:107] acquiring lock: {Name:mk8b91d534af263fed6bfc4c3e56b780c251605f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.922060    4390 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:31:35.922069    4390 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0728 18:31:35.922095    4390 cache.go:107] acquiring lock: {Name:mk5afbbec245606df65caeb2e17fdd66727c8645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:31:35.922119    4390 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0728 18:31:35.922088    4390 start.go:93] Provisioning new machine with config: &{Name:test-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:31:35.922165    4390 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:31:35.922208    4390 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0728 18:31:35.922224    4390 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:31:35.922371    4390 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0728 18:31:35.922661    4390 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:31:35.929574    4390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:31:35.933335    4390 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:31:35.933399    4390 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0728 18:31:35.934279    4390 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0728 18:31:35.934362    4390 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0728 18:31:35.934736    4390 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:31:35.936255    4390 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:31:35.936330    4390 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0728 18:31:35.936377    4390 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0728 18:31:35.947320    4390 start.go:159] libmachine.API.Create for "test-preload-592000" (driver="qemu2")
	I0728 18:31:35.947340    4390 client.go:168] LocalClient.Create starting
	I0728 18:31:35.947412    4390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:31:35.947444    4390 main.go:141] libmachine: Decoding PEM data...
	I0728 18:31:35.947453    4390 main.go:141] libmachine: Parsing certificate...
	I0728 18:31:35.947489    4390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:31:35.947516    4390 main.go:141] libmachine: Decoding PEM data...
	I0728 18:31:35.947523    4390 main.go:141] libmachine: Parsing certificate...
	I0728 18:31:35.947902    4390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:31:36.103038    4390 main.go:141] libmachine: Creating SSH key...
	I0728 18:31:36.279992    4390 main.go:141] libmachine: Creating Disk image...
	I0728 18:31:36.280021    4390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:31:36.280265    4390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:36.289822    4390 main.go:141] libmachine: STDOUT: 
	I0728 18:31:36.289858    4390 main.go:141] libmachine: STDERR: 
	I0728 18:31:36.289968    4390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2 +20000M
	I0728 18:31:36.299062    4390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:31:36.299081    4390 main.go:141] libmachine: STDERR: 
	I0728 18:31:36.299093    4390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:36.299100    4390 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:31:36.299115    4390 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:36.299146    4390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:02:cf:34:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:36.301110    4390 main.go:141] libmachine: STDOUT: 
	I0728 18:31:36.301139    4390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:36.301158    4390 client.go:171] duration metric: took 353.814125ms to LocalClient.Create
	I0728 18:31:36.555678    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0728 18:31:36.562023    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0728 18:31:36.585873    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0728 18:31:36.606199    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0728 18:31:36.614121    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0728 18:31:36.624040    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0728 18:31:36.626622    4390 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0728 18:31:36.626679    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0728 18:31:36.759161    4390 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0728 18:31:36.759256    4390 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 18:31:36.847738    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0728 18:31:36.847789    4390 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 925.953583ms
	I0728 18:31:36.847843    4390 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0728 18:31:37.057828    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0728 18:31:37.057881    4390 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.136213083s
	I0728 18:31:37.058505    4390 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0728 18:31:38.301385    4390 start.go:128] duration metric: took 2.379193125s to createHost
	I0728 18:31:38.301449    4390 start.go:83] releasing machines lock for "test-preload-592000", held for 2.37945625s
	W0728 18:31:38.301532    4390 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:38.317557    4390 out.go:177] * Deleting "test-preload-592000" in qemu2 ...
	W0728 18:31:38.351027    4390 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:38.351062    4390 start.go:729] Will try again in 5 seconds ...
	I0728 18:31:39.250307    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0728 18:31:39.250357    4390 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.328687791s
	I0728 18:31:39.250381    4390 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0728 18:31:39.736172    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0728 18:31:39.736222    4390 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.814123583s
	I0728 18:31:39.736282    4390 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0728 18:31:40.063777    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0728 18:31:40.063861    4390 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.142193458s
	I0728 18:31:40.063891    4390 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0728 18:31:40.505156    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0728 18:31:40.505202    4390 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.583493667s
	I0728 18:31:40.505261    4390 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0728 18:31:41.863592    4390 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0728 18:31:41.863637    4390 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.941750084s
	I0728 18:31:41.863668    4390 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0728 18:31:43.351306    4390 start.go:360] acquireMachinesLock for test-preload-592000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:31:43.351724    4390 start.go:364] duration metric: took 344.5µs to acquireMachinesLock for "test-preload-592000"
	I0728 18:31:43.351824    4390 start.go:93] Provisioning new machine with config: &{Name:test-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:31:43.352068    4390 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:31:43.360719    4390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:31:43.412277    4390 start.go:159] libmachine.API.Create for "test-preload-592000" (driver="qemu2")
	I0728 18:31:43.412321    4390 client.go:168] LocalClient.Create starting
	I0728 18:31:43.412428    4390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:31:43.412491    4390 main.go:141] libmachine: Decoding PEM data...
	I0728 18:31:43.412515    4390 main.go:141] libmachine: Parsing certificate...
	I0728 18:31:43.412585    4390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:31:43.412630    4390 main.go:141] libmachine: Decoding PEM data...
	I0728 18:31:43.412647    4390 main.go:141] libmachine: Parsing certificate...
	I0728 18:31:43.413179    4390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:31:43.574136    4390 main.go:141] libmachine: Creating SSH key...
	I0728 18:31:43.761298    4390 main.go:141] libmachine: Creating Disk image...
	I0728 18:31:43.761305    4390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:31:43.761562    4390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:43.771116    4390 main.go:141] libmachine: STDOUT: 
	I0728 18:31:43.771143    4390 main.go:141] libmachine: STDERR: 
	I0728 18:31:43.771192    4390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2 +20000M
	I0728 18:31:43.779452    4390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:31:43.779476    4390 main.go:141] libmachine: STDERR: 
	I0728 18:31:43.779491    4390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:43.779494    4390 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:31:43.779502    4390 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:31:43.779542    4390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c5:2e:84:d9:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/test-preload-592000/disk.qcow2
	I0728 18:31:43.781256    4390 main.go:141] libmachine: STDOUT: 
	I0728 18:31:43.781270    4390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:31:43.781288    4390 client.go:171] duration metric: took 368.962042ms to LocalClient.Create
	I0728 18:31:45.781860    4390 start.go:128] duration metric: took 2.429731709s to createHost
	I0728 18:31:45.781943    4390 start.go:83] releasing machines lock for "test-preload-592000", held for 2.430191541s
	W0728 18:31:45.782201    4390 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-592000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-592000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:31:45.791379    4390 out.go:177] 
	W0728 18:31:45.795618    4390 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:31:45.795644    4390 out.go:239] * 
	* 
	W0728 18:31:45.798292    4390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:31:45.808592    4390 out.go:177] 

** /stderr **
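
Before the VM launch fails, the stderr above records libmachine's two-step disk provisioning: qemu-img convert turns the raw boot disk into a qcow2 image, then qemu-img resize grows it by the requested 20000 MB. A sketch of the equivalent calls (the paths here are placeholders, and qemu-img is assumed to be on PATH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical paths; the real ones live under .minikube/machines/<profile>/.
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"

	// Step 1: convert the raw boot disk to qcow2 (the first "executing:" line).
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		log.Fatalf("convert: %v\n%s", err, out)
	}
	// Step 2: grow the image by the requested disk size (the "+20000M" line).
	if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize: %v\n%s", err, out)
	}
	log.Println("disk image ready:", qcow2)
}

On this agent both steps succeed ("STDOUT: Image resized."); the failure only comes at the next step, when QEMU is started through socket_vmnet_client.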
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-28 18:31:45.826884 -0700 PDT m=+2756.383499751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-592000 -n test-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-592000 -n test-preload-592000: exit status 7 (65.341542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-592000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-592000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-592000
--- FAIL: TestPreload (10.16s)
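
Note that the image-cache trace in the TestPreload log above completes even though the VM never starts: each image is looked up in the local Docker daemon first ("No such image"), pulled from the registry, arch-corrected where flagged ("want arm64 got amd64. fixing"), and saved as a tar under .minikube/cache/images. A sketch of that lookup order, assuming the go-containerregistry library; the reference and output path are examples, not minikube's actual code:

package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	ref, err := name.ParseReference("registry.k8s.io/pause:3.7")
	if err != nil {
		log.Fatal(err)
	}
	// Daemon lookup; on this agent it fails with "No such image", as logged.
	img, err := daemon.Image(ref)
	if err != nil {
		// Fall back to the registry, pinning arm64 to avoid the
		// "arch mismatch" warning seen in the log.
		img, err = remote.Image(ref, remote.WithPlatform(v1.Platform{
			Architecture: "arm64", OS: "linux",
		}))
		if err != nil {
			log.Fatal(err)
		}
	}
	// The "save to tar file ... succeeded" step.
	if err := tarball.WriteToFile("pause_3.7.tar", ref, img); err != nil {
		log.Fatal(err)
	}
	log.Println("cached pause:3.7")
}

This is why the cache lines keep reporting success while every VM start fails: the two code paths are independent.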

TestScheduledStopUnix (9.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-328000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-328000 --memory=2048 --driver=qemu2 : exit status 80 (9.763786459s)

-- stdout --
	* [scheduled-stop-328000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-328000" primary control-plane node in "scheduled-stop-328000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-328000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-328000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-328000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-328000" primary control-plane node in "scheduled-stop-328000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-328000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-328000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-28 18:31:55.730729 -0700 PDT m=+2766.287349168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-328000 -n scheduled-stop-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-328000 -n scheduled-stop-328000: exit status 7 (66.474209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-328000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-328000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-328000
--- FAIL: TestScheduledStopUnix (9.91s)
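
Each post-mortem above invokes out/minikube-darwin-arm64 status --format={{.Host}}; the argument is a standard Go text/template rendered against minikube's status record, which is why the captured stdout is the bare word "Stopped". A minimal illustration of the mechanism (the Status struct below is a stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the fields minikube exposes to --format.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	// {{.Host}} selects one field, exactly like the post-mortem invocations.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}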

TestSkaffold (12.96s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3168376949 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3168376949 version: (1.073153625s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-219000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-219000 --memory=2600 --driver=qemu2 : exit status 80 (9.815601792s)

-- stdout --
	* [skaffold-219000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-219000" primary control-plane node in "skaffold-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-219000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-219000" primary control-plane node in "skaffold-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-28 18:32:08.686396 -0700 PDT m=+2779.243020918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-219000 -n skaffold-219000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-219000 -n skaffold-219000: exit status 7 (63.894917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-219000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-219000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-219000
--- FAIL: TestSkaffold (12.96s)
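
Both create attempts above die the same way: "Connection refused" on /var/run/socket_vmnet means nothing is listening on that unix socket on the CI host, so minikube's automatic delete-and-retry cannot recover; the socket_vmnet service itself has to be running first. The sketch below is a minimal, hypothetical Go probe (not part of the minikube test suite) that reproduces the driver's failing connectivity check:

	// probe_socket_vmnet.go: report whether anything is accepting
	// connections on the socket_vmnet unix socket the qemu2 driver needs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the ERROR lines in the log:
			// the socket path may exist, but no daemon is serving it.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run on the affected host, an exit status of 1 from this probe would confirm the environment problem independently of minikube.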

TestRunningBinaryUpgrade (589.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4172146902 start -p running-upgrade-638000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4172146902 start -p running-upgrade-638000 --memory=2200 --vm-driver=qemu2 : (53.219065666s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-638000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0728 18:34:36.824444    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:34:56.766529    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-638000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.883279625s)

-- stdout --
	* [running-upgrade-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-638000" primary control-plane node in "running-upgrade-638000" cluster
	* Updating the running qemu2 "running-upgrade-638000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0728 18:33:48.891474    4787 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:33:48.891601    4787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:33:48.891607    4787 out.go:304] Setting ErrFile to fd 2...
	I0728 18:33:48.891610    4787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:33:48.891754    4787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:33:48.892827    4787 out.go:298] Setting JSON to false
	I0728 18:33:48.909676    4787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3799,"bootTime":1722213029,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:33:48.909746    4787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:33:48.914266    4787 out.go:177] * [running-upgrade-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:33:48.921326    4787 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:33:48.921377    4787 notify.go:220] Checking for updates...
	I0728 18:33:48.929281    4787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:33:48.933283    4787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:33:48.936248    4787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:33:48.939259    4787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:33:48.942297    4787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:33:48.945427    4787 config.go:182] Loaded profile config "running-upgrade-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:33:48.948208    4787 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0728 18:33:48.951316    4787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:33:48.954208    4787 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:33:48.961258    4787 start.go:297] selected driver: qemu2
	I0728 18:33:48.961263    4787 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:33:48.961306    4787 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:33:48.963442    4787 cni.go:84] Creating CNI manager for ""
	I0728 18:33:48.963459    4787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:33:48.963484    4787 start.go:340] cluster config:
	{Name:running-upgrade-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:33:48.963535    4787 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:33:48.971226    4787 out.go:177] * Starting "running-upgrade-638000" primary control-plane node in "running-upgrade-638000" cluster
	I0728 18:33:48.974218    4787 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:33:48.974230    4787 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0728 18:33:48.974238    4787 cache.go:56] Caching tarball of preloaded images
	I0728 18:33:48.974292    4787 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:33:48.974307    4787 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0728 18:33:48.974360    4787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/config.json ...
	I0728 18:33:48.974804    4787 start.go:360] acquireMachinesLock for running-upgrade-638000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:33:48.974836    4787 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "running-upgrade-638000"
	I0728 18:33:48.974844    4787 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:33:48.974849    4787 fix.go:54] fixHost starting: 
	I0728 18:33:48.975453    4787 fix.go:112] recreateIfNeeded on running-upgrade-638000: state=Running err=<nil>
	W0728 18:33:48.975461    4787 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:33:48.983186    4787 out.go:177] * Updating the running qemu2 "running-upgrade-638000" VM ...
	I0728 18:33:48.987226    4787 machine.go:94] provisionDockerMachine start ...
	I0728 18:33:48.987261    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:48.987363    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:48.987367    4787 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:33:49.057227    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-638000
	
	I0728 18:33:49.057243    4787 buildroot.go:166] provisioning hostname "running-upgrade-638000"
	I0728 18:33:49.057286    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.057395    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.057401    4787 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-638000 && echo "running-upgrade-638000" | sudo tee /etc/hostname
	I0728 18:33:49.130982    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-638000
	
	I0728 18:33:49.131037    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.131151    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.131162    4787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-638000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-638000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-638000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:33:49.201200    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:33:49.201212    4787 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1229/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1229/.minikube}
	I0728 18:33:49.201221    4787 buildroot.go:174] setting up certificates
	I0728 18:33:49.201226    4787 provision.go:84] configureAuth start
	I0728 18:33:49.201234    4787 provision.go:143] copyHostCerts
	I0728 18:33:49.201297    4787 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem, removing ...
	I0728 18:33:49.201306    4787 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem
	I0728 18:33:49.201425    4787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem (1082 bytes)
	I0728 18:33:49.201610    4787 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem, removing ...
	I0728 18:33:49.201615    4787 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem
	I0728 18:33:49.201665    4787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem (1123 bytes)
	I0728 18:33:49.201766    4787 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem, removing ...
	I0728 18:33:49.201769    4787 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem
	I0728 18:33:49.201807    4787 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem (1679 bytes)
	I0728 18:33:49.201901    4787 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-638000 san=[127.0.0.1 localhost minikube running-upgrade-638000]
	I0728 18:33:49.348082    4787 provision.go:177] copyRemoteCerts
	I0728 18:33:49.348129    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:33:49.348138    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:33:49.385898    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 18:33:49.392844    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0728 18:33:49.401741    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:33:49.408711    4787 provision.go:87] duration metric: took 207.480708ms to configureAuth
	I0728 18:33:49.408724    4787 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:33:49.408840    4787 config.go:182] Loaded profile config "running-upgrade-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:33:49.408872    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.408962    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.408967    4787 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:33:49.479542    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:33:49.479552    4787 buildroot.go:70] root file system type: tmpfs
	I0728 18:33:49.479609    4787 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:33:49.479658    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.479771    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.479805    4787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:33:49.556471    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:33:49.556530    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.556665    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.556672    4787 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:33:49.628186    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:33:49.628197    4787 machine.go:97] duration metric: took 640.966166ms to provisionDockerMachine
	I0728 18:33:49.628205    4787 start.go:293] postStartSetup for "running-upgrade-638000" (driver="qemu2")
	I0728 18:33:49.628216    4787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:33:49.628263    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:33:49.628271    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:33:49.667126    4787 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:33:49.668714    4787 info.go:137] Remote host: Buildroot 2021.02.12
	I0728 18:33:49.668723    4787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/addons for local assets ...
	I0728 18:33:49.668806    4787 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/files for local assets ...
	I0728 18:33:49.668896    4787 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem -> 17282.pem in /etc/ssl/certs
	I0728 18:33:49.668992    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:33:49.671492    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:33:49.678332    4787 start.go:296] duration metric: took 50.122042ms for postStartSetup
	I0728 18:33:49.678346    4787 fix.go:56] duration metric: took 703.498ms for fixHost
	I0728 18:33:49.678379    4787 main.go:141] libmachine: Using SSH client type: native
	I0728 18:33:49.678491    4787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101096a10] 0x101099270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0728 18:33:49.678495    4787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:33:49.748492    4787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722216829.711324013
	
	I0728 18:33:49.748503    4787 fix.go:216] guest clock: 1722216829.711324013
	I0728 18:33:49.748507    4787 fix.go:229] Guest: 2024-07-28 18:33:49.711324013 -0700 PDT Remote: 2024-07-28 18:33:49.678347 -0700 PDT m=+0.806563293 (delta=32.977013ms)
	I0728 18:33:49.748519    4787 fix.go:200] guest clock delta is within tolerance: 32.977013ms
	I0728 18:33:49.748522    4787 start.go:83] releasing machines lock for "running-upgrade-638000", held for 773.682417ms
	I0728 18:33:49.748590    4787 ssh_runner.go:195] Run: cat /version.json
	I0728 18:33:49.748601    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:33:49.748591    4787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:33:49.748631    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	W0728 18:33:49.749135    4787 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50354->127.0.0.1:50249: write: broken pipe
	I0728 18:33:49.749153    4787 retry.go:31] will retry after 213.022374ms: ssh: handshake failed: write tcp 127.0.0.1:50354->127.0.0.1:50249: write: broken pipe
	W0728 18:33:49.783256    4787 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0728 18:33:49.783298    4787 ssh_runner.go:195] Run: systemctl --version
	I0728 18:33:49.785078    4787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:33:49.786554    4787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:33:49.786583    4787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0728 18:33:49.789917    4787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0728 18:33:49.795459    4787 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:33:49.795468    4787 start.go:495] detecting cgroup driver to use...
	I0728 18:33:49.795539    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:33:49.800570    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0728 18:33:49.803576    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:33:49.807251    4787 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:33:49.807278    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:33:49.810224    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:33:49.813271    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:33:49.816242    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:33:49.818962    4787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:33:49.822260    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:33:49.825674    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:33:49.828733    4787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:33:49.831520    4787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:33:49.834266    4787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:33:49.837435    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:49.928468    4787 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:33:49.936716    4787 start.go:495] detecting cgroup driver to use...
	I0728 18:33:49.936785    4787 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:33:49.942716    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:33:49.949274    4787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:33:49.956999    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:33:49.961620    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:33:49.966549    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:33:49.972940    4787 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:33:49.974229    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:33:49.977485    4787 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:33:49.982446    4787 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:33:50.074169    4787 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:33:50.173830    4787 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:33:50.173895    4787 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:33:50.179399    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:50.270515    4787 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:33:52.983487    4787 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.712955792s)
	I0728 18:33:52.983569    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:33:52.988533    4787 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:33:52.994777    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:33:52.999588    4787 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:33:53.087617    4787 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:33:53.165586    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:53.243059    4787 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:33:53.249357    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:33:53.254326    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:53.334742    4787 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:33:53.373833    4787 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:33:53.373901    4787 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:33:53.376948    4787 start.go:563] Will wait 60s for crictl version
	I0728 18:33:53.377003    4787 ssh_runner.go:195] Run: which crictl
	I0728 18:33:53.378309    4787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:33:53.390066    4787 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0728 18:33:53.390150    4787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:33:53.402622    4787 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:33:53.423136    4787 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0728 18:33:53.423199    4787 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0728 18:33:53.424752    4787 kubeadm.go:883] updating cluster {Name:running-upgrade-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0728 18:33:53.424799    4787 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:33:53.424841    4787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:33:53.435773    4787 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:33:53.435785    4787 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:33:53.435832    4787 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:33:53.438792    4787 ssh_runner.go:195] Run: which lz4
	I0728 18:33:53.439998    4787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 18:33:53.441158    4787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:33:53.441180    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0728 18:33:54.397975    4787 docker.go:649] duration metric: took 958.014292ms to copy over tarball
	I0728 18:33:54.398045    4787 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0728 18:33:55.538176    4787 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.140114541s)
	I0728 18:33:55.538191    4787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0728 18:33:55.554904    4787 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:33:55.558391    4787 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0728 18:33:55.563573    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:55.643994    4787 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:33:56.819242    4787 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175231291s)
	I0728 18:33:56.819338    4787 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:33:56.831710    4787 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:33:56.831721    4787 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:33:56.831727    4787 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0728 18:33:56.836784    4787 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:33:56.838578    4787 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:33:56.840703    4787 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:33:56.841047    4787 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:33:56.842449    4787 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:33:56.842596    4787 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:33:56.843671    4787 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:33:56.844061    4787 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:33:56.845147    4787 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:33:56.845267    4787 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:33:56.846917    4787 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0728 18:33:56.846939    4787 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:33:56.847847    4787 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:33:56.848216    4787 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:33:56.849324    4787 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0728 18:33:56.849674    4787 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:33:57.264621    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:33:57.264621    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:33:57.272089    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:33:57.274390    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:33:57.283951    4787 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0728 18:33:57.283979    4787 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:33:57.284044    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:33:57.287314    4787 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0728 18:33:57.287330    4787 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:33:57.287373    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:33:57.295169    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0728 18:33:57.303625    4787 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0728 18:33:57.303656    4787 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:33:57.303713    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:33:57.305275    4787 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0728 18:33:57.305287    4787 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:33:57.305321    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0728 18:33:57.311235    4787 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0728 18:33:57.311361    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:33:57.314678    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0728 18:33:57.315357    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0728 18:33:57.325094    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0728 18:33:57.346994    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0728 18:33:57.346994    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0728 18:33:57.347032    4787 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0728 18:33:57.347047    4787 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:33:57.347093    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:33:57.347130    4787 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0728 18:33:57.347144    4787 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0728 18:33:57.347166    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0728 18:33:57.349729    4787 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0728 18:33:57.349743    4787 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:33:57.349780    4787 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0728 18:33:57.367468    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0728 18:33:57.367497    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0728 18:33:57.367520    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0728 18:33:57.367590    4787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0728 18:33:57.367590    4787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:33:57.369352    4787 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0728 18:33:57.369371    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0728 18:33:57.369459    4787 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0728 18:33:57.369468    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0728 18:33:57.390336    4787 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0728 18:33:57.390348    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0728 18:33:57.436564    4787 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0728 18:33:57.436591    4787 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:33:57.436599    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0728 18:33:57.448554    4787 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0728 18:33:57.448667    4787 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:33:57.482538    4787 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0728 18:33:57.482568    4787 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0728 18:33:57.482591    4787 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:33:57.482638    4787 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:33:57.500248    4787 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 18:33:57.500366    4787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:33:57.501682    4787 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0728 18:33:57.501691    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0728 18:33:57.532933    4787 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:33:57.532946    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0728 18:33:57.766344    4787 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0728 18:33:57.766380    4787 cache_images.go:92] duration metric: took 934.647125ms to LoadCachedImages
	W0728 18:33:57.766423    4787 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0728 18:33:57.766428    4787 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0728 18:33:57.766479    4787 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-638000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:33:57.766565    4787 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:33:57.780472    4787 cni.go:84] Creating CNI manager for ""
	I0728 18:33:57.780484    4787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:33:57.780493    4787 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:33:57.780503    4787 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-638000 NodeName:running-upgrade-638000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:33:57.780564    4787 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-638000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:33:57.780618    4787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0728 18:33:57.783514    4787 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:33:57.783542    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:33:57.786391    4787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0728 18:33:57.791541    4787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:33:57.796369    4787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
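Note: the three scp calls above stage the kubelet drop-in, the kubelet unit, and the kubeadm config. To inspect the staged config by hand (a sketch):

    minikube -p running-upgrade-638000 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new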
	I0728 18:33:57.801748    4787 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0728 18:33:57.803338    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:33:57.887236    4787 ssh_runner.go:195] Run: sudo systemctl start kubelet
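Note: after the daemon-reload and kubelet start, a quick liveness check would be (a sketch):

    minikube -p running-upgrade-638000 ssh -- sudo systemctl is-active kubelet
    # prints "active" once the unit is running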
	I0728 18:33:57.892678    4787 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000 for IP: 10.0.2.15
	I0728 18:33:57.892686    4787 certs.go:194] generating shared ca certs ...
	I0728 18:33:57.892694    4787 certs.go:226] acquiring lock for ca certs: {Name:mkc846ff99a644cdf9e42c80143f563c1808731e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:33:57.892863    4787 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key
	I0728 18:33:57.892914    4787 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key
	I0728 18:33:57.892920    4787 certs.go:256] generating profile certs ...
	I0728 18:33:57.892994    4787 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.key
	I0728 18:33:57.893009    4787 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key.db1c487b
	I0728 18:33:57.893021    4787 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt.db1c487b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0728 18:33:57.937186    4787 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt.db1c487b ...
	I0728 18:33:57.937192    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt.db1c487b: {Name:mkda8674a6c6fe58f43f44296e0ae9e5125f1fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:33:57.938737    4787 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key.db1c487b ...
	I0728 18:33:57.938743    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key.db1c487b: {Name:mk52a05e8a8a5ba169a38ce7f0a3954a3379a5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:33:57.938896    4787 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt.db1c487b -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt
	I0728 18:33:57.939055    4787 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key.db1c487b -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key
	I0728 18:33:57.939395    4787 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/proxy-client.key
	I0728 18:33:57.939556    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem (1338 bytes)
	W0728 18:33:57.939591    4787 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728_empty.pem, impossibly tiny 0 bytes
	I0728 18:33:57.939596    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:33:57.939622    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem (1082 bytes)
	I0728 18:33:57.939650    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:33:57.939678    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem (1679 bytes)
	I0728 18:33:57.939904    4787 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:33:57.940280    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:33:57.946937    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:33:57.954390    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:33:57.962069    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 18:33:57.969485    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0728 18:33:57.976357    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:33:57.983221    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:33:57.990345    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:33:57.997573    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0728 18:33:58.004213    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /usr/share/ca-certificates/17282.pem (1708 bytes)
	I0728 18:33:58.010740    4787 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:33:58.017496    4787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
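Note: with all certs and the kubeconfig copied over, a certificate can be spot-checked on the node, e.g. (a sketch):

    minikube -p running-upgrade-638000 ssh -- sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt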
	I0728 18:33:58.022160    4787 ssh_runner.go:195] Run: openssl version
	I0728 18:33:58.023992    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17282.pem && ln -fs /usr/share/ca-certificates/17282.pem /etc/ssl/certs/17282.pem"
	I0728 18:33:58.026803    4787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17282.pem
	I0728 18:33:58.028192    4787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:54 /usr/share/ca-certificates/17282.pem
	I0728 18:33:58.028215    4787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17282.pem
	I0728 18:33:58.029871    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17282.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:33:58.032646    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:33:58.035544    4787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:33:58.036938    4787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:33:58.036959    4787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:33:58.038749    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 18:33:58.041749    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0728 18:33:58.044988    4787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0728 18:33:58.046319    4787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:54 /usr/share/ca-certificates/1728.pem
	I0728 18:33:58.046337    4787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0728 18:33:58.048155    4787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0728 18:33:58.050772    4787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:33:58.052411    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0728 18:33:58.054048    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0728 18:33:58.055824    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0728 18:33:58.057648    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0728 18:33:58.059547    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0728 18:33:58.061201    4787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0728 18:33:58.062950    4787 kubeadm.go:392] StartCluster: {Name:running-upgrade-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:33:58.063011    4787 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:33:58.073368    4787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:33:58.076646    4787 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0728 18:33:58.076652    4787 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0728 18:33:58.076676    4787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 18:33:58.079243    4787 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:33:58.079483    4787 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-638000" does not appear in /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:33:58.079535    4787 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1229/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-638000" cluster setting kubeconfig missing "running-upgrade-638000" context setting]
	I0728 18:33:58.079704    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:33:58.080399    4787 kapi.go:59] client config for running-upgrade-638000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10242c5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:33:58.080724    4787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 18:33:58.083409    4787 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-638000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
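Note: the drift is two changes — the CRI socket gained its unix:// scheme and the kubelet cgroup driver flipped from systemd to cgroupfs (plus two new kubelet settings) — so minikube reconfigures the cluster instead of reusing the old config. The same check by hand would be (a sketch):

    minikube -p running-upgrade-638000 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new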
	I0728 18:33:58.083414    4787 kubeadm.go:1160] stopping kube-system containers ...
	I0728 18:33:58.083451    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:33:58.094673    4787 docker.go:483] Stopping containers: [0d0d94b0c44c bf22a96efc77 fc8f8514a17c 7749f4fd6625 8369608b1758 58e1b88fc31d c92f92673b0d de3ec59e1671 2d0363e75992 c2c3b7d9691b a6ff8b1ad69d 210ee3a5306d 09ebcc23ba45 3511e37072b0]
	I0728 18:33:58.094737    4787 ssh_runner.go:195] Run: docker stop 0d0d94b0c44c bf22a96efc77 fc8f8514a17c 7749f4fd6625 8369608b1758 58e1b88fc31d c92f92673b0d de3ec59e1671 2d0363e75992 c2c3b7d9691b a6ff8b1ad69d 210ee3a5306d 09ebcc23ba45 3511e37072b0
	I0728 18:33:58.114149    4787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 18:33:58.215868    4787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:33:58.219862    4787 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 29 01:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 29 01:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 01:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 01:33 /etc/kubernetes/scheduler.conf
	
	I0728 18:33:58.219894    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0728 18:33:58.222980    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:33:58.223006    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:33:58.226522    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0728 18:33:58.229893    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:33:58.229915    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:33:58.233228    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0728 18:33:58.236104    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:33:58.236129    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:33:58.238910    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0728 18:33:58.241639    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:33:58.241658    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
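Note: the four grep/rm pairs above all follow one pattern: if a config file does not reference the expected API endpoint, it is deleted so kubeadm can regenerate it. Condensed (a sketch, not minikube's actual source):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50281" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done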
	I0728 18:33:58.244293    4787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:33:58.246933    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:33:58.275421    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:33:58.672739    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:33:58.868504    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:33:58.893603    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
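Note: the five init phases above regenerate, in order: certificates, kubeconfigs, the kubelet bootstrap, the control-plane static pods, and local etcd. Confirming the manifests landed (a sketch):

    minikube -p running-upgrade-638000 ssh -- sudo ls /etc/kubernetes/manifests
    # expect etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml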
	I0728 18:33:58.918425    4787 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:33:58.918503    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:33:59.420879    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:33:59.920928    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:34:00.420592    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:34:00.920560    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:34:00.924982    4787 api_server.go:72] duration metric: took 2.006560292s to wait for apiserver process to appear ...
	I0728 18:34:00.924998    4787 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:34:00.925007    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:05.927157    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:05.927198    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:10.927651    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:10.927726    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:15.928465    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:15.928486    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:20.929113    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:20.929178    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:25.930169    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:25.930284    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:30.932245    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:30.932316    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:35.934342    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:35.934457    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:40.937106    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:40.937153    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:45.938799    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:45.938882    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:50.941517    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:50.941586    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:34:55.943276    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:34:55.943350    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:00.945901    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
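Note: each healthz probe above times out at the 5-second client deadline, after which minikube switches to collecting component logs. Probing from inside the guest directly (a sketch; assumes curl exists in the guest image):

    minikube -p running-upgrade-638000 ssh -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz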
	I0728 18:35:00.946184    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:00.973699    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:00.973822    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:00.991763    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:00.991853    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:01.004558    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:01.004635    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:01.015924    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:01.016003    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:01.026102    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:01.026167    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:01.036271    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:01.036340    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:01.048977    4787 logs.go:276] 0 containers: []
	W0728 18:35:01.048987    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:01.049040    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:01.059002    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:01.059028    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:01.059033    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:01.085109    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:01.085123    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:01.097046    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:01.097058    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:01.108235    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:01.108247    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:01.132456    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:01.132465    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:01.143741    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:01.143756    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:01.217777    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:01.217790    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:01.232734    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:01.232748    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:01.244456    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:01.244466    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:01.261990    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:01.262004    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:01.300315    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:01.300323    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:01.304599    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:01.304607    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:01.318406    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:01.318415    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
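Note: the container-status probe above is deliberately defensive; roughly unrolled (a sketch):

    # prefer crictl if installed, otherwise fall back to docker
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi
    # (the original one-liner also falls back to docker if the crictl invocation itself fails)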
	I0728 18:35:01.331440    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:01.331455    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:01.345631    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:01.345642    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:01.356796    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:01.356807    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:01.371060    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:01.371071    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:03.884505    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:08.885313    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:08.885511    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:08.905271    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:08.905357    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:08.917940    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:08.918045    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:08.929283    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:08.929355    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:08.940236    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:08.940310    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:08.950687    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:08.950752    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:08.961185    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:08.961252    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:08.980662    4787 logs.go:276] 0 containers: []
	W0728 18:35:08.980674    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:08.980725    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:08.990617    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:08.990633    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:08.990638    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:09.007714    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:09.007724    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:09.018797    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:09.018810    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:09.023552    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:09.023561    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:09.037302    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:09.037314    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:09.050989    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:09.051002    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:09.064978    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:09.064990    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:09.081737    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:09.081749    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:09.097575    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:09.097586    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:09.109831    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:09.109844    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:09.135141    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:09.135152    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:09.148221    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:09.148234    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:09.159874    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:09.159886    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:09.171630    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:09.171641    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:09.183464    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:09.183474    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:09.224162    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:09.224172    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:09.260869    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:09.260879    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:11.787307    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:16.789771    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:16.790105    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:16.829854    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:16.829984    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:16.856570    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:16.856642    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:16.874050    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:16.874145    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:16.885682    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:16.885750    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:16.896258    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:16.896313    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:16.907461    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:16.907533    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:16.917582    4787 logs.go:276] 0 containers: []
	W0728 18:35:16.917593    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:16.917646    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:16.928172    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:16.928195    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:16.928201    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:16.942642    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:16.942652    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:16.954281    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:16.954291    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:16.978975    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:16.978982    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:16.990567    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:16.990576    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:17.022700    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:17.022713    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:17.036997    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:17.037006    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:17.048195    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:17.048210    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:17.059306    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:17.059316    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:17.095145    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:17.095156    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:17.113501    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:17.113515    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:17.125001    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:17.125015    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:17.163380    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:17.163388    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:17.167341    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:17.167349    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:17.181314    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:17.181327    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:17.195085    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:17.195099    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:17.206676    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:17.206688    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:19.719710    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:24.722601    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:24.723022    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:24.764808    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:24.764953    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:24.787184    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:24.787303    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:24.801793    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:24.801870    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:24.814404    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:24.814474    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:24.825452    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:24.825517    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:24.836347    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:24.836411    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:24.848519    4787 logs.go:276] 0 containers: []
	W0728 18:35:24.848532    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:24.848595    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:24.859366    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:24.859384    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:24.859389    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:24.873207    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:24.873222    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:24.878026    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:24.878034    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:24.895543    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:24.895554    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:24.909868    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:24.909881    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:24.924308    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:24.924319    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:24.948493    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:24.948502    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:24.983146    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:24.983158    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:24.997910    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:24.997921    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:25.025756    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:25.025766    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:25.041208    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:25.041218    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:25.052991    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:25.053004    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:25.070789    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:25.070802    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:25.112545    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:25.112557    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:25.127087    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:25.127101    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:25.141569    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:25.141579    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:25.153432    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:25.153445    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:27.667007    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:32.669781    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:32.670065    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:32.702156    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:32.702283    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:32.726628    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:32.726732    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:32.742944    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:32.743027    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:32.756522    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:32.756593    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:32.767737    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:32.767803    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:32.778567    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:32.778629    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:32.789034    4787 logs.go:276] 0 containers: []
	W0728 18:35:32.789046    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:32.789098    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:32.799600    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:32.799619    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:32.799624    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:32.811312    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:32.811326    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:32.815604    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:32.815613    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:32.830034    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:32.830047    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:32.844077    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:32.844086    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:32.868052    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:32.868064    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:32.883687    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:32.883697    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:32.895276    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:32.895286    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:32.935638    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:32.935648    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:32.960119    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:32.960131    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:32.980665    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:32.980678    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:32.994026    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:32.994038    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:33.006103    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:33.006114    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:33.021318    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:33.021328    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:33.060328    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:33.060340    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:33.073678    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:33.073689    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:33.085560    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:33.085573    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:35.605124    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:40.607894    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:40.608407    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:40.646187    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:40.646324    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:40.666114    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:40.666210    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:40.680104    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:40.680183    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:40.692245    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:40.692310    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:40.702460    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:40.702530    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:40.712886    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:40.712947    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:40.722519    4787 logs.go:276] 0 containers: []
	W0728 18:35:40.722532    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:40.722580    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:40.736747    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
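Note: the enumeration above finds two containers for most components (e.g. [2c332dd607ad a6ff8b1ad69d] for kube-apiserver) because docker ps -a also lists exited containers, so a restarted component shows both its current and its previous instance; cri-dockerd names containers k8s_<container>_<pod>_..., which is what the name=k8s_* filter keys on. A sketch of the same enumeration, run against a local Docker daemon in place of minikube's ssh_runner (that substitution is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            // docker ps -a --filter name=k8s_<component> --format {{.ID}}
            // assumption: local docker stands in for minikube's ssh_runner
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }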
	I0728 18:35:40.736763    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:40.736770    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:40.750610    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:40.750624    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:40.765056    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:40.765066    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:40.776330    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:40.776342    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:40.790330    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:40.790341    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:40.801824    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:40.801833    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:40.819250    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:40.819259    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:40.830293    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:40.830304    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:40.871320    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:40.871331    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:40.888800    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:40.888811    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:40.901421    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:40.901433    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:40.914392    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:40.914405    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:40.918844    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:40.918852    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:40.952887    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:40.952899    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:40.981048    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:40.981058    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:40.995244    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:40.995254    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:41.006591    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:41.006602    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
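Note: every "Gathering logs for X" step above reduces to a single shell command capped at 400 lines: docker logs --tail 400 <id> for containers, journalctl -n 400 for the kubelet and docker/cri-docker units, dmesg piped through tail for kernel messages, and kubectl describe nodes for the cluster view. A sketch of that dispatch, with the command strings transcribed from the "Run:" lines above and executed through a local bash -c in place of minikube's ssh_runner (an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // command strings transcribed from the log; the container ID is the
        // kube-apiserver instance from this trace
        // assumption: local bash -c stands in for minikube's ssh_runner
        sources := map[string]string{
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "kube-apiserver": "docker logs --tail 400 2c332dd607ad",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s (err: %v) ==\n%s\n", name, err, out)
        }
    }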
	I0728 18:35:43.537070    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:48.538661    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:48.539174    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:48.580883    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:48.581027    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:48.602811    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:48.602933    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:48.618065    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:48.618143    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:48.630526    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:48.630593    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:48.641594    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:48.641655    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:48.651864    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:48.651924    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:48.669242    4787 logs.go:276] 0 containers: []
	W0728 18:35:48.669254    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:48.669301    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:48.680394    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:48.680411    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:48.680416    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:48.719688    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:48.719698    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:48.723833    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:48.723839    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:48.737194    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:48.737204    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:48.749126    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:48.749137    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:48.785291    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:48.785306    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:48.799569    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:48.799587    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:48.813465    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:48.813476    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:48.830558    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:48.830568    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:48.841915    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:48.841925    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:48.866421    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:48.866429    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:48.880329    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:48.880338    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:48.891424    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:48.891441    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:48.907982    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:48.907997    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:48.935107    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:48.935121    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:48.946591    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:48.946601    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:48.961935    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:48.961945    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
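Note on the container-status command above: `which crictl || echo crictl` substitutes crictl's full path when the binary is installed and the bare name otherwise (so the sudo invocation fails cleanly when it is absent), and the trailing `|| sudo docker ps -a` falls back to Docker whenever the crictl invocation fails, letting the step degrade gracefully on Docker-only nodes like this one.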
	I0728 18:35:51.476403    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:35:56.479110    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:35:56.479535    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:35:56.520222    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:35:56.520355    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:35:56.542442    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:35:56.542574    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:35:56.557363    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:35:56.557435    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:35:56.572303    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:35:56.572372    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:35:56.583540    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:35:56.583598    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:35:56.594177    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:35:56.594233    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:35:56.604816    4787 logs.go:276] 0 containers: []
	W0728 18:35:56.604829    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:35:56.604886    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:35:56.615350    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:35:56.615368    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:35:56.615374    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:35:56.653332    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:35:56.653342    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:35:56.672052    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:35:56.672065    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:35:56.683593    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:35:56.683605    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:35:56.697804    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:35:56.697817    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:35:56.709480    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:35:56.709493    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:35:56.720817    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:35:56.720827    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:35:56.732980    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:35:56.732993    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:35:56.768349    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:35:56.768363    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:35:56.794584    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:35:56.794594    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:35:56.809206    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:35:56.809217    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:35:56.822329    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:35:56.822338    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:35:56.826552    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:35:56.826558    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:35:56.841460    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:35:56.841472    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:35:56.853759    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:35:56.853773    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:35:56.871167    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:35:56.871184    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:35:56.888608    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:35:56.888619    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:35:59.416213    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:04.418963    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:04.419144    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:04.440108    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:04.440193    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:04.456284    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:04.456364    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:04.468441    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:04.468513    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:04.479395    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:04.479467    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:04.489666    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:04.489725    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:04.500243    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:04.500309    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:04.510108    4787 logs.go:276] 0 containers: []
	W0728 18:36:04.510122    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:04.510174    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:04.520219    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:04.520236    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:04.520241    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:04.544062    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:04.544071    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:04.580939    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:04.580954    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:04.599997    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:04.600016    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:04.615427    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:04.615443    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:04.628984    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:04.628996    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:04.656957    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:04.656977    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:04.670251    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:04.670269    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:04.682835    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:04.682848    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:04.725020    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:04.725031    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:04.740385    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:04.740395    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:04.765118    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:04.765131    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:04.781870    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:04.781880    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:04.793550    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:04.793561    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:04.805791    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:04.805802    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:04.810370    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:04.810376    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:04.824937    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:04.824947    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:07.338145    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:12.340680    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:12.341089    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:12.377903    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:12.378024    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:12.398133    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:12.398223    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:12.422456    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:12.422535    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:12.434152    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:12.434222    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:12.444566    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:12.444622    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:12.455051    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:12.455123    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:12.465254    4787 logs.go:276] 0 containers: []
	W0728 18:36:12.465265    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:12.465318    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:12.476044    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:12.476062    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:12.476068    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:12.510151    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:12.510165    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:12.524595    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:12.524608    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:12.544666    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:12.544677    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:12.549460    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:12.549467    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:12.580425    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:12.580435    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:12.592400    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:12.592414    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:12.612599    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:12.612610    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:12.626482    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:12.626496    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:12.638703    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:12.638715    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:12.649979    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:12.649991    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:12.662324    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:12.662336    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:12.702704    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:12.702713    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:12.716241    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:12.716251    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:12.729964    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:12.729974    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:12.741599    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:12.741611    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:12.753288    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:12.753299    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:15.278934    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:20.281342    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:20.281787    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:20.319818    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:20.319948    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:20.341358    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:20.341479    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:20.358292    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:20.358376    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:20.370821    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:20.370887    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:20.381596    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:20.381665    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:20.392030    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:20.392097    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:20.402834    4787 logs.go:276] 0 containers: []
	W0728 18:36:20.402844    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:20.402896    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:20.413447    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:20.413471    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:20.413477    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:20.418111    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:20.418119    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:20.437498    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:20.437510    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:20.449014    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:20.449026    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:20.460647    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:20.460657    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:20.476519    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:20.476532    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:20.516316    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:20.516328    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:20.550935    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:20.550945    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:20.567152    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:20.567169    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:20.594709    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:20.594723    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:20.606870    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:20.606885    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:20.620536    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:20.620545    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:20.638012    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:20.638030    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:20.649751    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:20.649761    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:20.661078    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:20.661087    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:20.675201    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:20.675216    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:20.686821    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:20.686830    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:23.213532    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:28.215924    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
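Note the different failure mode in this probe: "dial tcp 10.0.2.15:8443: i/o timeout" suggests the TCP connection itself could not be established within the client timeout, whereas the surrounding probes fail with "context deadline exceeded ... while awaiting headers", i.e. the dial succeeded but no HTTP response arrived in time. Either way, the apiserver never reports healthy across this window.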
	I0728 18:36:28.216047    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:28.227813    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:28.227885    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:28.243958    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:28.244033    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:28.254564    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:28.254636    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:28.265732    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:28.265801    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:28.276291    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:28.276353    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:28.286893    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:28.286968    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:28.297066    4787 logs.go:276] 0 containers: []
	W0728 18:36:28.297079    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:28.297134    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:28.307927    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:28.307944    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:28.307950    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:28.344688    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:28.344699    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:28.359090    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:28.359101    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:28.370731    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:28.370744    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:28.388634    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:28.388645    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:28.400599    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:28.400610    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:28.412110    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:28.412121    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:28.456177    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:28.456197    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:28.484476    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:28.484498    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:28.490100    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:28.490112    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:28.506830    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:28.506844    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:28.521078    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:28.521089    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:28.534649    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:28.534666    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:28.548222    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:28.548235    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:28.564934    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:28.564946    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:28.592350    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:28.592377    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:28.606753    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:28.606770    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:31.124956    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:36.126358    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:36.126737    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:36.163209    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:36.163344    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:36.183824    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:36.183918    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:36.198109    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:36.198196    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:36.210658    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:36.210730    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:36.222174    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:36.222245    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:36.239436    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:36.239513    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:36.249698    4787 logs.go:276] 0 containers: []
	W0728 18:36:36.249709    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:36.249763    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:36.260479    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:36.260494    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:36.260499    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:36.275084    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:36.275095    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:36.288257    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:36.288268    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:36.299406    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:36.299417    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:36.322676    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:36.322683    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:36.360998    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:36.361006    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:36.384446    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:36.384457    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:36.399299    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:36.399312    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:36.416950    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:36.416959    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:36.431999    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:36.432010    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:36.436842    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:36.436848    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:36.471448    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:36.471462    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:36.485715    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:36.485726    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:36.504324    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:36.504335    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:36.520352    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:36.520366    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:36.532894    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:36.532904    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:36.545923    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:36.545934    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:39.058163    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:44.060362    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:44.060470    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:44.072281    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:44.072360    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:44.083731    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:44.083818    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:44.094638    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:44.094708    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:44.105706    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:44.105780    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:44.126725    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:44.126786    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:44.138265    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:44.138337    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:44.149005    4787 logs.go:276] 0 containers: []
	W0728 18:36:44.149016    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:44.149075    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:44.159542    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:44.159561    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:44.159566    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:44.170898    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:44.170909    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:44.185978    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:44.185990    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:44.204015    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:44.204034    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:44.216319    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:44.216331    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:44.242731    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:44.242750    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:44.280895    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:44.280907    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:44.296315    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:44.296331    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:44.326371    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:44.326384    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:44.352675    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:44.352692    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:44.369537    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:44.369550    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:44.388971    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:44.388983    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:44.401914    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:44.401926    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:44.417508    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:44.417519    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:44.430094    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:44.430106    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:44.473191    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:44.473207    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:44.478132    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:44.478139    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:46.994554    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:51.996851    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:51.996983    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:52.008746    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:52.008825    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:52.021024    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:52.021106    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:52.032298    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:52.032377    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:52.043640    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:52.043717    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:52.054297    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:52.054369    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:52.064997    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:52.065069    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:52.076318    4787 logs.go:276] 0 containers: []
	W0728 18:36:52.076333    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:52.076391    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:52.088016    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:52.088034    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:36:52.088040    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:36:52.112053    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:36:52.112071    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:36:52.128416    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:36:52.128434    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:36:52.143934    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:36:52.143951    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:36:52.156653    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:36:52.156668    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:36:52.169879    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:36:52.169892    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:36:52.213585    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:52.213600    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:36:52.249502    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:36:52.249519    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:36:52.263924    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:36:52.263937    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:36:52.275194    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:36:52.275207    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:36:52.279423    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:36:52.279432    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:36:52.303086    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:36:52.303097    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:36:52.315964    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:36:52.315976    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:36:52.327358    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:36:52.327370    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:36:52.339229    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:36:52.339240    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:36:52.353112    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:36:52.353123    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:36:52.371045    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:36:52.371055    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:36:54.890302    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:59.891424    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:59.891604    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:59.911411    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:59.911505    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:59.929763    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:59.929830    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:59.941084    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:59.941149    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:59.951450    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:59.951519    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:59.967936    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:59.967997    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:59.978701    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:59.978755    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:59.988475    4787 logs.go:276] 0 containers: []
	W0728 18:36:59.988487    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:59.988532    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:59.999126    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:59.999145    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:59.999151    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:00.033458    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:00.033470    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:00.057470    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:00.057484    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:00.071758    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:00.071767    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:00.089243    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:00.089253    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:00.103932    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:00.103942    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:00.127028    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:00.127037    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:00.138593    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:00.138606    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:00.176836    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:00.176847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:00.190477    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:00.190489    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:00.204345    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:00.204357    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:00.216209    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:00.216218    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:00.228969    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:00.228980    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:00.247214    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:00.247225    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:00.251433    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:00.251439    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:00.262401    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:00.262411    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:00.273578    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:00.273592    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
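
The cycle above then repeats below with fresh timestamps: minikube probes the apiserver's /healthz endpoint with a 5-second client timeout and, after each failure, re-enumerates the control-plane containers by name filter and tails the last 400 lines of each. A rough manual equivalent, a sketch only, using the address and filter names taken from the log and run inside the guest VM:

	# probe the apiserver health endpoint (5s timeout, self-signed cert)
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# list one component's containers (running or exited) and tail each one's logs
	for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	  docker logs --tail 400 "$id"
	done
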
	I0728 18:37:02.786915    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:07.787842    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:07.787967    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:07.800123    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:07.800199    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:07.811541    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:07.811626    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:07.822213    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:07.822280    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:07.833164    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:07.833232    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:07.843722    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:07.843781    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:07.854455    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:07.854516    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:07.867089    4787 logs.go:276] 0 containers: []
	W0728 18:37:07.867103    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:07.867161    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:07.882453    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:07.882475    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:07.882480    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:07.905364    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:07.905372    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:07.919943    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:07.919954    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:07.937523    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:07.937535    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:07.952573    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:07.952583    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:07.966672    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:07.966683    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:07.977921    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:07.977935    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:07.992635    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:07.992644    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:08.005264    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:08.005275    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:08.023614    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:08.023625    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:08.028481    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:08.028488    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:08.042663    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:08.042673    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:08.067377    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:08.067387    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:08.080317    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:08.080328    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:08.120783    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:08.120792    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:08.157138    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:08.157147    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:08.169309    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:08.169320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:10.683484    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:15.685037    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:15.685513    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:15.727722    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:15.727904    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:15.756625    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:15.756713    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:15.774139    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:15.774215    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:15.785661    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:15.785721    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:15.800256    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:15.800333    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:15.812339    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:15.812410    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:15.822441    4787 logs.go:276] 0 containers: []
	W0728 18:37:15.822452    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:15.822505    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:15.836898    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:15.836917    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:15.836923    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:15.877975    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:15.877985    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:15.891274    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:15.891286    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:15.915538    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:15.915550    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:15.929679    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:15.929693    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:15.944501    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:15.944512    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:15.959644    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:15.959659    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:15.971670    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:15.971683    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:15.983594    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:15.983608    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:15.988285    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:15.988294    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:16.002791    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:16.002802    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:16.014483    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:16.014497    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:16.026301    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:16.026314    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:16.049783    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:16.049791    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:16.062332    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:16.062342    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:16.079864    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:16.079874    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:16.114356    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:16.114367    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:18.628482    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:23.631098    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:23.631204    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:23.644843    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:23.644928    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:23.656358    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:23.656436    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:23.668226    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:23.668295    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:23.687119    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:23.687198    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:23.701407    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:23.701587    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:23.715165    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:23.715236    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:23.728307    4787 logs.go:276] 0 containers: []
	W0728 18:37:23.728321    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:23.728382    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:23.741064    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:23.741083    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:23.741089    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:23.746320    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:23.746332    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:23.761684    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:23.761700    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:23.774552    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:23.774563    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:23.801535    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:23.801556    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:23.846174    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:23.846193    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:23.893395    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:23.893412    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:23.920948    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:23.920974    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:23.936784    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:23.936796    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:23.952475    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:23.952492    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:23.971443    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:23.971458    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:23.984477    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:23.984495    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:24.001267    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:24.001285    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:24.019664    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:24.019682    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:24.032628    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:24.032640    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:24.046245    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:24.046257    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:24.059452    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:24.059466    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:26.574779    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:31.576934    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:31.577033    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:31.588141    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:31.588209    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:31.599051    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:31.599122    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:31.609931    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:31.609994    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:31.621792    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:31.621864    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:31.637682    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:31.637753    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:31.648927    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:31.649003    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:31.659472    4787 logs.go:276] 0 containers: []
	W0728 18:37:31.659488    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:31.659551    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:31.678469    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:31.678487    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:31.678492    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:31.703160    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:31.703175    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:31.717863    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:31.717875    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:31.735893    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:31.735904    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:31.748264    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:31.748276    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:31.762929    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:31.762939    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:31.775349    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:31.775361    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:31.787831    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:31.787845    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:31.802753    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:31.802767    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:31.814653    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:31.814663    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:31.819422    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:31.819431    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:31.838086    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:31.838097    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:31.849726    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:31.849738    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:31.862241    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:31.862253    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:31.906533    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:31.906550    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:31.946017    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:31.946036    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:31.973543    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:31.973565    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:34.488223    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:39.490501    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:39.490629    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:39.502040    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:39.502104    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:39.512930    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:39.513008    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:39.523438    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:39.523504    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:39.534162    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:39.534237    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:39.545101    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:39.545171    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:39.555211    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:39.555276    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:39.565469    4787 logs.go:276] 0 containers: []
	W0728 18:37:39.565487    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:39.565539    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:39.575777    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:39.575792    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:39.575797    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:39.587070    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:39.587080    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:39.625560    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:39.625570    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:39.629928    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:39.629934    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:39.641302    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:39.641314    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:39.659161    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:39.659175    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:39.682230    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:39.682243    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:39.693813    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:39.693822    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:39.705311    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:39.705321    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:39.727659    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:39.727667    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:39.739490    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:39.739502    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:39.753582    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:39.753593    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:39.767722    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:39.767731    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:39.782230    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:39.782240    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:39.793658    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:39.793669    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:39.829041    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:39.829056    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:39.844454    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:39.844465    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:42.357522    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:47.359696    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:47.359855    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:47.371578    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:47.371651    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:47.381416    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:47.381495    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:47.392198    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:47.392261    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:47.402704    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:47.402768    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:47.412750    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:47.412818    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:47.423147    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:47.423210    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:47.435597    4787 logs.go:276] 0 containers: []
	W0728 18:37:47.435609    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:47.435663    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:47.446145    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:47.446164    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:47.446171    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:47.460312    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:47.460322    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:47.474687    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:47.474700    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:47.487970    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:47.487983    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:47.527834    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:47.527848    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:47.565890    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:47.565902    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:47.587836    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:47.587844    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:47.591925    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:47.591932    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:47.603372    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:47.603383    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:47.614429    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:47.614440    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:47.626723    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:47.626735    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:47.638794    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:47.638805    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:47.651222    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:47.651233    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:47.662891    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:47.662902    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:47.677030    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:47.677039    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:47.698715    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:47.698725    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:47.723190    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:47.723204    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:50.239423    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:55.241718    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:55.241873    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:55.253726    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:55.253806    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:55.264636    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:55.264707    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:55.275150    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:55.275215    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:55.285769    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:55.285839    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:55.301276    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:55.301349    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:55.312304    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:55.312369    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:55.328078    4787 logs.go:276] 0 containers: []
	W0728 18:37:55.328091    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:55.328148    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:55.338469    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:55.338492    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:55.338499    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:55.352954    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:55.352964    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:55.376096    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:55.376104    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:55.399369    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:55.399380    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:55.410916    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:55.410927    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:55.426100    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:55.426110    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:55.430357    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:55.430365    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:55.441519    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:55.441528    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:55.458914    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:55.458923    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:55.470550    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:55.470560    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:55.483968    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:55.483977    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:55.520100    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:55.520109    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:55.534302    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:55.534314    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:55.545722    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:55.545733    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:55.557695    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:55.557706    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:55.569645    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:55.569657    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:55.581720    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:55.581730    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:58.123642    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:03.126042    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:03.126117    4787 kubeadm.go:597] duration metric: took 4m5.049564458s to restartPrimaryControlPlane
	W0728 18:38:03.126186    4787 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0728 18:38:03.126216    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 18:38:04.085474    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:38:04.090573    4787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:38:04.093417    4787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:38:04.096224    4787 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:38:04.096230    4787 kubeadm.go:157] found existing configuration files:
	
	I0728 18:38:04.096257    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0728 18:38:04.098675    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:38:04.098698    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:38:04.101432    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0728 18:38:04.104530    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:38:04.104549    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:38:04.107359    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0728 18:38:04.109684    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:38:04.109707    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:38:04.112752    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0728 18:38:04.115458    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:38:04.115479    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
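
The stale-config pass above is a grep-or-remove loop: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. Condensed into a sketch (endpoint and file names taken from the log):

	endpoint="https://control-plane.minikube.internal:50281"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
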
	I0728 18:38:04.117892    4787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:38:04.133914    4787 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0728 18:38:04.133974    4787 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:38:04.183700    4787 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:38:04.183757    4787 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:38:04.183811    4787 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:38:04.233404    4787 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:38:04.237597    4787 out.go:204]   - Generating certificates and keys ...
	I0728 18:38:04.237657    4787 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:38:04.237699    4787 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:38:04.237808    4787 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:38:04.237898    4787 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0728 18:38:04.237956    4787 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:38:04.237984    4787 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0728 18:38:04.238042    4787 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0728 18:38:04.238078    4787 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:38:04.238182    4787 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:38:04.238281    4787 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:38:04.238329    4787 kubeadm.go:310] [certs] Using the existing "sa" key
	I0728 18:38:04.238434    4787 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:38:04.382907    4787 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:38:04.465069    4787 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:38:04.618178    4787 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:38:04.758063    4787 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:38:04.786162    4787 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:38:04.786543    4787 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:38:04.786642    4787 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:38:04.877102    4787 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:38:04.880372    4787 out.go:204]   - Booting up control plane ...
	I0728 18:38:04.880434    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:38:04.880484    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:38:04.880550    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:38:04.880624    4787 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:38:04.881327    4787 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0728 18:38:09.382904    4787 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501411 seconds
	I0728 18:38:09.383050    4787 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:38:09.387549    4787 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:38:09.905403    4787 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:38:09.905718    4787 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-638000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:38:10.409417    4787 kubeadm.go:310] [bootstrap-token] Using token: k7ek6g.vvicwoh071co5a96
	I0728 18:38:10.411871    4787 out.go:204]   - Configuring RBAC rules ...
	I0728 18:38:10.411930    4787 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:38:10.411974    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:38:10.416564    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:38:10.417602    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:38:10.418567    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:38:10.419517    4787 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:38:10.428713    4787 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:38:10.622681    4787 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:38:10.813056    4787 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:38:10.813449    4787 kubeadm.go:310] 
	I0728 18:38:10.813483    4787 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:38:10.813486    4787 kubeadm.go:310] 
	I0728 18:38:10.813524    4787 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:38:10.813527    4787 kubeadm.go:310] 
	I0728 18:38:10.813539    4787 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:38:10.813576    4787 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:38:10.813604    4787 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:38:10.813609    4787 kubeadm.go:310] 
	I0728 18:38:10.813637    4787 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:38:10.813641    4787 kubeadm.go:310] 
	I0728 18:38:10.813663    4787 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:38:10.813669    4787 kubeadm.go:310] 
	I0728 18:38:10.813699    4787 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:38:10.813739    4787 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:38:10.813784    4787 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:38:10.813787    4787 kubeadm.go:310] 
	I0728 18:38:10.813831    4787 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:38:10.813872    4787 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:38:10.813875    4787 kubeadm.go:310] 
	I0728 18:38:10.813924    4787 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k7ek6g.vvicwoh071co5a96 \
	I0728 18:38:10.813981    4787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b \
	I0728 18:38:10.813993    4787 kubeadm.go:310] 	--control-plane 
	I0728 18:38:10.813996    4787 kubeadm.go:310] 
	I0728 18:38:10.814044    4787 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:38:10.814048    4787 kubeadm.go:310] 
	I0728 18:38:10.814091    4787 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k7ek6g.vvicwoh071co5a96 \
	I0728 18:38:10.814155    4787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b 
	I0728 18:38:10.814214    4787 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
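
The join commands above embed a --discovery-token-ca-cert-hash. If it ever needs to be recomputed, the standard recipe from the kubeadm documentation, pointed at the certificateDir this run uses (/var/lib/minikube/certs, per the [certs] lines above), is:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
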
	I0728 18:38:10.814234    4787 cni.go:84] Creating CNI manager for ""
	I0728 18:38:10.814242    4787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:38:10.821210    4787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0728 18:38:10.825291    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0728 18:38:10.828264    4787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
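
The 496-byte file copied here is a CNI bridge conflist. Its exact contents are not captured in the log; a minimal illustrative example of the format (the field values below are assumptions, not necessarily what minikube wrote) looks like:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF
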
	I0728 18:38:10.833099    4787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:38:10.833138    4787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:38:10.833173    4787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-638000 minikube.k8s.io/updated_at=2024_07_28T18_38_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=running-upgrade-638000 minikube.k8s.io/primary=true
	I0728 18:38:10.883913    4787 ops.go:34] apiserver oom_adj: -16
	I0728 18:38:10.883928    4787 kubeadm.go:1113] duration metric: took 50.823042ms to wait for elevateKubeSystemPrivileges
	I0728 18:38:10.883935    4787 kubeadm.go:394] duration metric: took 4m12.8210955s to StartCluster
	I0728 18:38:10.883945    4787 settings.go:142] acquiring lock: {Name:mk87b264018a6cee2b66b065d01a79c5a5adf3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:38:10.884046    4787 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:38:10.884420    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:38:10.884638    4787 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:38:10.884650    4787 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:38:10.884688    4787 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-638000"
	I0728 18:38:10.884700    4787 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-638000"
	W0728 18:38:10.884707    4787 addons.go:243] addon storage-provisioner should already be in state true
	I0728 18:38:10.884718    4787 host.go:66] Checking if "running-upgrade-638000" exists ...
	I0728 18:38:10.884731    4787 config.go:182] Loaded profile config "running-upgrade-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:38:10.884741    4787 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-638000"
	I0728 18:38:10.884769    4787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-638000"
	I0728 18:38:10.884982    4787 retry.go:31] will retry after 1.026558627s: connect: dial unix /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/monitor: connect: connection refused
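
The retry.go:31 line above is minikube backing off after a refused connection to the machine's monitor socket. A generic sketch of that retry-after-delay pattern (a hypothetical helper in the same spirit, not minikube's actual retry package; the backoff factor is an assumption):

    // Backoff sketch modeled on the "will retry after ..." lines in the log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, sleeping between failures and
    // growing the delay multiplicatively.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // simple multiplicative backoff (assumption)
        }
        return err
    }

    func main() {
        _ = retry(3, time.Second, func() error {
            return errors.New("connect: connection refused")
        })
    }
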
	I0728 18:38:10.885725    4787 kapi.go:59] client config for running-upgrade-638000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10242c5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:38:10.885880    4787 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-638000"
	W0728 18:38:10.885887    4787 addons.go:243] addon default-storageclass should already be in state true
	I0728 18:38:10.885894    4787 host.go:66] Checking if "running-upgrade-638000" exists ...
	I0728 18:38:10.886408    4787 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:38:10.886413    4787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:38:10.886418    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:38:10.888270    4787 out.go:177] * Verifying Kubernetes components...
	I0728 18:38:10.896273    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:38:10.989127    4787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:38:10.994737    4787 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:38:10.994781    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:38:10.998723    4787 api_server.go:72] duration metric: took 114.074583ms to wait for apiserver process to appear ...
	I0728 18:38:10.998732    4787 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:38:10.998740    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:11.062706    4787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:38:11.918756    4787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:38:11.921684    4787 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:38:11.921692    4787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:38:11.921705    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:38:11.966421    4787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:38:16.000849    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:16.000888    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:21.001227    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:21.001279    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:25.996810    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:25.996863    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:30.988364    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:30.988421    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:35.982505    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:35.982549    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:40.978954    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:40.978987    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0728 18:38:41.364594    4787 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0728 18:38:41.368752    4787 out.go:177] * Enabled addons: storage-provisioner
	I0728 18:38:41.380719    4787 addons.go:510] duration metric: took 30.521652792s for enable addons: enabled=[storage-provisioner]
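
From here the run is dominated by one repeating ~5 s cycle: an HTTPS GET against /healthz that hits the client timeout, then another attempt, until the overall node wait expires. A minimal sketch of such a poll loop (endpoint, timeout, and overall wait taken from the log; the code itself is illustrative, not minikube's api_server.go):

    // healthz polling sketch; illustrative only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s cadence in the log
            Transport: &http.Transport{
                // The real client authenticates with the profile's certs;
                // skipping verification keeps the sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("gave up waiting for /healthz")
    }
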
	I0728 18:38:45.976673    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:45.976722    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:50.975690    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:50.975740    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:55.976011    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:55.976078    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:00.977100    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:00.977151    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:05.978440    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:05.978492    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:10.979327    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:10.979486    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:11.013365    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:11.013441    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:11.035727    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:11.035797    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:11.046532    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:11.046606    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:11.057320    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:11.057397    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:11.068915    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:11.068987    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:11.080034    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:11.080104    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:11.090520    4787 logs.go:276] 0 containers: []
	W0728 18:39:11.090532    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:11.090590    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:11.101216    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:11.101230    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:11.101236    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:11.116207    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:11.116220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:11.128242    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:11.128252    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:11.139876    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:11.139888    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:11.154738    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:11.154751    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:11.168006    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:11.168017    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:11.180343    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:11.180354    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:11.214247    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:11.214257    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:11.218725    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:11.218731    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:11.304251    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:11.304266    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:11.318674    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:11.318688    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:11.336430    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:11.336440    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:11.359986    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:11.359995    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
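
Each failed healthz window triggers the same diagnostic sweep seen above: list containers matching a k8s_* name filter, then tail each one's logs. A condensed sketch of that pattern using os/exec (illustrative; minikube actually drives these commands through ssh_runner inside the guest):

    // loggather.go - illustrative sketch of the docker ps + docker logs sweep.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name="+name, "--format={{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, name := range []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns", "k8s_kube-scheduler",
            "k8s_kube-proxy", "k8s_kube-controller-manager", "k8s_storage-provisioner",
        } {
            for _, id := range containerIDs(name) {
                fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(logs))
            }
        }
    }
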
	I0728 18:39:13.881318    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:18.883199    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:18.883367    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:18.898965    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:18.899046    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:18.913538    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:18.913612    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:18.924297    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:18.924352    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:18.934869    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:18.934938    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:18.945632    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:18.945700    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:18.956429    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:18.956496    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:18.967418    4787 logs.go:276] 0 containers: []
	W0728 18:39:18.967429    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:18.967479    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:18.979148    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:18.979163    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:18.979170    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:18.983473    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:18.983481    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:19.024639    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:19.024649    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:19.036268    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:19.036278    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:19.051474    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:19.051485    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:19.074871    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:19.074882    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:19.088503    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:19.088515    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:19.100406    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:19.100416    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:19.134776    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:19.134786    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:19.149076    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:19.149087    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:19.169974    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:19.169985    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:19.181560    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:19.181570    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:19.193255    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:19.193266    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:21.712573    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:26.714884    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:26.715077    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:26.738589    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:26.738690    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:26.755693    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:26.755764    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:26.779288    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:26.779366    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:26.791111    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:26.791179    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:26.802266    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:26.802342    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:26.813472    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:26.813538    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:26.823924    4787 logs.go:276] 0 containers: []
	W0728 18:39:26.823938    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:26.823988    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:26.835308    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:26.835323    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:26.835329    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:26.847584    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:26.847596    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:26.883102    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:26.883111    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:26.887976    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:26.887984    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:26.924619    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:26.924631    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:26.937421    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:26.937434    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:26.951750    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:26.951764    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:26.966070    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:26.966084    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:26.982152    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:26.982166    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:26.999350    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:26.999366    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:27.015908    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:27.015924    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:27.037275    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:27.037288    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:27.050235    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:27.050249    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:29.577193    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:34.579386    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:34.579726    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:34.615717    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:34.615847    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:34.635633    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:34.635730    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:34.651449    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:34.651529    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:34.667716    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:34.667790    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:34.684403    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:34.684473    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:34.695713    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:34.695786    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:34.707315    4787 logs.go:276] 0 containers: []
	W0728 18:39:34.707326    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:34.707379    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:34.718918    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:34.718932    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:34.718937    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:34.730826    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:34.730838    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:34.743625    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:34.743635    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:34.761710    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:34.761722    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:34.775578    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:34.775591    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:34.811924    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:34.811934    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:34.816717    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:34.816723    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:34.831052    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:34.831065    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:34.846795    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:34.846808    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:34.870259    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:34.870266    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:34.881948    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:34.881959    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:34.920872    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:34.920883    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:34.938570    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:34.938580    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:37.458356    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:42.460473    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:42.460683    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:42.479244    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:42.479336    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:42.493442    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:42.493516    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:42.505165    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:42.505229    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:42.516202    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:42.516270    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:42.526999    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:42.527066    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:42.537853    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:42.537912    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:42.548345    4787 logs.go:276] 0 containers: []
	W0728 18:39:42.548366    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:42.548428    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:42.559637    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:42.559652    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:42.559658    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:42.583193    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:42.583202    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:42.588042    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:42.588049    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:42.602759    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:42.602771    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:42.614973    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:42.614983    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:42.633926    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:42.633936    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:42.649742    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:42.649752    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:42.663272    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:42.663286    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:42.697192    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:42.697201    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:42.735137    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:42.735148    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:42.749298    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:42.749312    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:42.761518    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:42.761532    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:42.779677    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:42.779688    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:45.293784    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:50.295940    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:50.296117    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:50.311765    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:50.311849    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:50.324491    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:50.324562    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:50.334712    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:50.334782    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:50.345048    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:50.345114    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:50.355477    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:50.355553    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:50.369297    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:50.369368    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:50.379507    4787 logs.go:276] 0 containers: []
	W0728 18:39:50.379517    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:50.379572    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:50.389716    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:50.389729    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:50.389734    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:50.425031    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:50.425044    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:50.442672    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:50.442686    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:50.456695    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:50.456709    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:50.468117    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:50.468130    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:50.479451    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:50.479465    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:50.496546    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:50.496559    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:50.521026    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:50.521036    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:50.525456    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:50.525464    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:50.561185    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:50.561196    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:50.575804    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:50.575815    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:50.587581    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:50.587591    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:50.599488    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:50.599498    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:53.115246    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:58.117364    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:58.117579    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:58.135681    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:58.135775    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:58.156447    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:58.156519    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:58.167902    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:58.167968    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:58.178640    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:58.178708    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:58.189390    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:58.189469    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:58.205272    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:58.205341    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:58.215825    4787 logs.go:276] 0 containers: []
	W0728 18:39:58.215836    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:58.215892    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:58.226064    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:58.226082    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:58.226087    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:58.230816    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:58.230824    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:58.246946    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:58.246960    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:58.259079    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:58.259093    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:58.270708    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:58.270721    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:58.288065    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:58.288075    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:58.323756    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:58.323762    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:58.363530    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:58.363541    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:58.378100    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:58.378111    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:58.392336    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:58.392348    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:58.403452    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:58.403462    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:58.418395    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:58.418406    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:58.442451    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:58.442459    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:00.956866    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:05.959219    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:05.959496    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:05.988596    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:05.988707    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:06.006500    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:06.006589    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:06.019622    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:06.019685    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:06.035835    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:06.035904    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:06.046223    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:06.046294    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:06.056796    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:06.056858    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:06.067041    4787 logs.go:276] 0 containers: []
	W0728 18:40:06.067052    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:06.067106    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:06.077688    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:06.077704    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:06.077709    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:06.096425    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:06.096439    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:06.109742    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:06.109753    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:06.124466    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:06.124480    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:06.136246    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:06.136256    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:06.166041    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:06.166055    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:06.179778    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:06.179792    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:06.184216    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:06.184234    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:06.220189    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:06.220204    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:06.244278    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:06.244295    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:06.256328    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:06.256342    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:06.267713    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:06.267728    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:06.303682    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:06.303697    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:08.819417    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:13.821714    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:13.821889    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:13.849752    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:13.849837    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:13.862818    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:13.862894    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:13.873899    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:13.873962    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:13.883978    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:13.884054    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:13.894475    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:13.894544    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:13.904565    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:13.904630    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:13.914796    4787 logs.go:276] 0 containers: []
	W0728 18:40:13.914812    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:13.914873    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:13.925026    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:13.925047    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:13.925053    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:13.948400    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:13.948408    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:13.982106    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:13.982114    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:13.993672    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:13.993681    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:14.008190    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:14.008201    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:14.022078    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:14.022089    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:14.033586    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:14.033597    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:14.044838    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:14.044849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:14.059970    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:14.059981    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:14.078188    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:14.078199    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:14.083124    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:14.083133    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:14.118875    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:14.118887    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:14.130605    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:14.130616    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:16.644652    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:21.647167    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:21.647655    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:21.680072    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:21.680206    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:21.699363    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:21.699448    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:21.713009    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:21.713091    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:21.724504    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:21.724570    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:21.735707    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:21.735781    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:21.745957    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:21.746023    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:21.755892    4787 logs.go:276] 0 containers: []
	W0728 18:40:21.755908    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:21.755968    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:21.766338    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:21.766353    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:21.766358    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:21.778335    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:21.778347    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:21.790335    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:21.790346    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:21.795192    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:21.795201    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:21.809389    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:21.809400    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:21.820963    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:21.820976    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:21.833023    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:21.833037    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:21.844620    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:21.844634    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:21.861367    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:21.861377    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:21.878678    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:21.878690    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:21.902300    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:21.902312    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:21.937243    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:21.937250    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:21.973727    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:21.973738    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:24.490506    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:29.491827    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:29.491923    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:29.504085    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:29.504156    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:29.518854    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:29.518936    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:29.529563    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:29.529635    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:29.539780    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:29.539845    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:29.550276    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:29.550352    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:29.561012    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:29.561076    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:29.571029    4787 logs.go:276] 0 containers: []
	W0728 18:40:29.571040    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:29.571094    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:29.582149    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:29.582166    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:29.582171    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:29.594559    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:29.594575    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:29.607119    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:29.607129    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:29.618830    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:29.618841    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:29.655294    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:29.655308    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:29.659823    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:29.659834    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:29.673480    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:29.673493    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:29.684844    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:29.684855    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:29.700041    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:29.700053    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:29.719742    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:29.719755    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:29.753668    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:29.753676    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:29.764837    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:29.764849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:29.776599    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:29.776611    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:29.801874    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:29.801882    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:29.813859    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:29.813870    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
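With the IDs from the discovery step in hand, each component's logs are captured with a bounded tail, so a crash-looping container cannot flood the report:

    docker logs --tail 400 45ee255a36d7    # kube-apiserver, ID taken from the discovery step above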
	I0728 18:40:32.341367    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:37.343745    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:37.343979    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:37.369107    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:37.369241    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:37.386709    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:37.386793    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:37.400994    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:37.401089    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:37.411691    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:37.411752    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:37.424467    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:37.424540    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:37.435321    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:37.435399    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:37.447312    4787 logs.go:276] 0 containers: []
	W0728 18:40:37.447328    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:37.447381    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:37.457962    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:37.457979    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:37.457985    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:37.463214    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:37.463220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:37.477179    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:37.477189    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:37.488322    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:37.488333    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:37.500373    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:37.500385    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:37.534302    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:37.534320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:37.547295    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:37.547309    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:37.562553    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:37.562564    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:37.586340    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:37.586351    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:37.598243    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:37.598257    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:37.634920    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:37.634934    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:37.648247    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:37.648265    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:37.660414    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:37.660424    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:37.674632    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:37.674642    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:37.686324    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:37.686336    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
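Host-side services are read straight from the systemd journal; -u can be repeated to merge several units into one stream, and -n caps the output at the newest 400 entries:

    sudo journalctl -u docker -u cri-docker -n 400
    sudo journalctl -u kubelet -n 400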
	I0728 18:40:40.213853    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:45.216007    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:45.216258    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:45.237098    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:45.237214    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:45.251769    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:45.251846    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:45.264114    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:45.264186    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:45.275458    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:45.275529    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:45.286252    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:45.286329    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:45.297130    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:45.297204    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:45.306967    4787 logs.go:276] 0 containers: []
	W0728 18:40:45.306979    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:45.307033    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:45.317474    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:45.317491    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:45.317496    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:45.341463    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:45.341473    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:45.355144    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:45.355155    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:45.366313    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:45.366324    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:45.377361    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:45.377372    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:45.389293    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:45.389304    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:45.425261    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:45.425271    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:45.437477    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:45.437491    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:45.449919    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:45.449931    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:45.464268    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:45.464280    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:45.476853    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:45.476867    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:45.494443    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:45.494456    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:45.509811    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:45.509823    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:45.546164    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:45.546173    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:45.550651    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:45.550659    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
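The "container status" step seen earlier in this cycle is worth unpacking, since it degrades gracefully across runtimes: if crictl is on the PATH, which prints its path and the backtick substitution runs that binary; if not, echo supplies the literal word crictl, that command fails, and the outer || falls back to plain docker:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a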
	I0728 18:40:48.064365    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:53.066729    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:53.067139    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:53.107482    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:53.107611    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:53.128646    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:53.128752    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:53.143211    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:53.143289    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:53.158107    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:53.158171    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:53.168975    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:53.169046    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:53.179495    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:53.179562    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:53.190191    4787 logs.go:276] 0 containers: []
	W0728 18:40:53.190205    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:53.190267    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:53.201654    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:53.201670    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:53.201677    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:53.213648    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:53.213659    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:53.250099    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:53.250113    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:53.263477    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:53.263490    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:53.278636    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:53.278648    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:53.290590    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:53.290601    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:53.295024    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:53.295032    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:53.310177    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:53.310186    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:53.323041    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:53.323054    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:53.348111    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:53.348120    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:53.359714    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:53.359725    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:53.371400    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:53.371410    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:53.405169    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:53.405178    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:53.419287    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:53.419298    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:53.434552    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:53.434562    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
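The "describe nodes" step uses the kubectl binary that minikube pins to the cluster's Kubernetes version rather than whatever is on the host PATH, and passes the kubeconfig explicitly because the command runs as root over SSH:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig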
	I0728 18:40:55.960421    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:00.961730    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:00.962049    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:00.982690    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:00.982776    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:00.997471    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:00.997543    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:01.009219    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:01.009279    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:01.019974    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:01.020043    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:01.029970    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:01.030037    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:01.040549    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:01.040611    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:01.051264    4787 logs.go:276] 0 containers: []
	W0728 18:41:01.051278    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:01.051329    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:01.064970    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:01.064988    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:01.064993    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:01.083182    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:01.083192    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:01.100749    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:01.100759    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:01.112619    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:01.112629    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:01.128364    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:01.128375    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:01.141859    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:01.141869    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:01.156943    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:01.156954    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:01.168998    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:01.169009    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:01.183870    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:01.183880    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:01.208545    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:01.208553    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:01.243300    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:01.243309    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:01.279797    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:01.279813    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:01.291382    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:01.291399    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:01.295912    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:01.295920    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:01.307704    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:01.307714    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:03.823277    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:08.825505    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:08.825659    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:08.839720    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:08.839805    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:08.851892    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:08.851974    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:08.862897    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:08.862970    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:08.873707    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:08.873772    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:08.884805    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:08.884880    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:08.895171    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:08.895240    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:08.905540    4787 logs.go:276] 0 containers: []
	W0728 18:41:08.905549    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:08.905603    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:08.916538    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:08.916554    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:08.916559    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:08.921136    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:08.921146    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:08.932432    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:08.932442    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:08.943897    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:08.943910    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:08.954932    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:08.954944    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:09.005572    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:09.005583    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:09.031513    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:09.031524    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:09.042841    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:09.042852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:09.058137    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:09.058149    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:09.069934    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:09.069948    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:09.104251    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:09.104261    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:09.119699    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:09.119712    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:09.139660    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:09.139669    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:09.151533    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:09.151545    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:09.164247    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:09.164256    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
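The dmesg capture filters the kernel ring buffer down to the severities that matter for triage. With util-linux dmesg, -H selects human-readable output, -P disables the pager that -H would otherwise start (needed here because the output is piped), -L=never drops color codes, and --level restricts output to the listed severities:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400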
	I0728 18:41:11.684266    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:16.686455    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:16.686600    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:16.708290    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:16.708364    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:16.723044    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:16.723109    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:16.733807    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:16.733882    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:16.747107    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:16.747176    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:16.757436    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:16.757498    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:16.767633    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:16.767705    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:16.778105    4787 logs.go:276] 0 containers: []
	W0728 18:41:16.778118    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:16.778178    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:16.789199    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:16.789215    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:16.789220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:16.800787    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:16.800797    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:16.812570    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:16.812584    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:16.826710    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:16.826723    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:16.838331    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:16.838341    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:16.853170    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:16.853183    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:16.864284    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:16.864295    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:16.881494    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:16.881507    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:16.907370    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:16.907382    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:16.918868    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:16.918878    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:16.954758    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:16.954768    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:16.959343    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:16.959349    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:16.994527    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:16.994537    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:17.008838    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:17.008847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:17.020570    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:17.020584    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:19.537874    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:24.540217    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:24.540402    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:24.559587    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:24.559682    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:24.573869    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:24.573943    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:24.585939    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:24.586005    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:24.597013    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:24.597069    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:24.611674    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:24.611731    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:24.622030    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:24.622089    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:24.631807    4787 logs.go:276] 0 containers: []
	W0728 18:41:24.631818    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:24.631868    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:24.642640    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:24.642659    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:24.642665    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:24.663308    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:24.663320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:24.675093    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:24.675104    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:24.701103    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:24.701121    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:24.714334    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:24.714352    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:24.729497    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:24.729513    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:24.753417    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:24.753431    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:24.765337    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:24.765348    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:24.776834    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:24.776847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:24.788378    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:24.788389    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:24.806845    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:24.806855    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:24.818287    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:24.818301    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:24.833216    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:24.833226    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:24.867580    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:24.867595    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:24.871805    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:24.871816    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
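Taken together, the cycle that repeats from 18:40:21 onward is a poll-and-diagnose loop: probe healthz, and while it fails, re-enumerate the k8s_ containers and re-collect every log source. A minimal shell sketch of that shape, built only from commands visible in this log (minikube's real loop lives in Go, in api_server.go and logs.go):

    # illustrative reconstruction, not minikube source
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
        for id in $(docker ps -a --filter=name=k8s_ --format={{.ID}}); do
            docker logs --tail 400 "$id"       # per-component container logs
        done
        sudo journalctl -u kubelet -n 400      # host-side kubelet journal
    done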
	I0728 18:41:27.407771    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:32.410064    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:32.410161    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:32.422052    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:32.422118    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:32.433448    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:32.433520    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:32.444121    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:32.444218    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:32.456882    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:32.456949    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:32.468594    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:32.468662    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:32.481567    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:32.481632    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:32.492011    4787 logs.go:276] 0 containers: []
	W0728 18:41:32.492020    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:32.492072    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:32.502438    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:32.502454    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:32.502459    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:32.507164    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:32.507170    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:32.521841    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:32.521850    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:32.555693    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:32.555708    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:32.567311    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:32.567321    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:32.581855    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:32.581869    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:32.595091    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:32.595106    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:32.630515    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:32.630530    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:32.645193    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:32.645204    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:32.657291    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:32.657304    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:32.676853    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:32.676866    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:32.688119    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:32.688132    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:32.712489    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:32.712496    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:32.728403    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:32.728413    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:32.740289    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:32.740304    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:35.258451    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:40.260611    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:40.260819    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:40.272760    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:40.272836    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:40.283360    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:40.283433    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:40.293715    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:40.293785    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:40.304279    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:40.304344    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:40.314927    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:40.314998    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:40.325702    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:40.325768    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:40.336803    4787 logs.go:276] 0 containers: []
	W0728 18:41:40.336819    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:40.336872    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:40.347390    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:40.347406    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:40.347411    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:40.351923    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:40.351932    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:40.392308    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:40.392323    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:40.406969    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:40.406979    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:40.418325    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:40.418335    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:40.435626    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:40.435640    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:40.459340    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:40.459353    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:40.470835    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:40.470852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:40.485315    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:40.485324    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:40.496903    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:40.496916    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:40.508853    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:40.508864    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:40.520628    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:40.520643    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:40.532813    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:40.532824    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:40.567523    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:40.567533    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:40.582471    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:40.582482    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:43.096621    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:48.098827    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:48.098923    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:48.110024    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:48.110090    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:48.121104    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:48.121173    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:48.132967    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:48.133041    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:48.146675    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:48.146744    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:48.158935    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:48.159004    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:48.170272    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:48.170340    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:48.184732    4787 logs.go:276] 0 containers: []
	W0728 18:41:48.184745    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:48.184804    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:48.195310    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:48.195330    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:48.195335    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:48.207392    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:48.207404    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:48.234245    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:48.234258    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:48.246479    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:48.246489    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:48.262303    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:48.262317    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:48.274838    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:48.274849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:48.286331    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:48.286342    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:48.311267    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:48.311276    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:48.347500    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:48.347515    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:48.370642    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:48.370654    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:48.382990    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:48.383001    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:48.418444    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:48.418454    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:48.423362    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:48.423368    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:48.437481    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:48.437492    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:48.454650    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:48.454662    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:50.968936    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:55.971047    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:55.971248    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:55.985652    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:55.985730    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:55.997440    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:55.997515    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:56.008046    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:56.008120    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:56.018990    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:56.019063    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:56.030392    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:56.030460    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:56.041197    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:56.041261    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:56.051248    4787 logs.go:276] 0 containers: []
	W0728 18:41:56.051259    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:56.051316    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:56.064388    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:56.064404    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:56.064410    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:56.075913    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:56.075928    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:56.080394    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:56.080402    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:56.094840    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:56.094852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:56.116608    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:56.116618    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:56.128074    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:56.128084    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:56.142877    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:56.142887    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:56.157540    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:56.157552    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:56.195382    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:56.195395    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:56.207470    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:56.207485    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:56.219597    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:56.219609    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:56.237432    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:56.237447    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:56.253512    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:56.253524    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:56.265255    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:56.265266    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:56.288796    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:56.288805    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
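The block above is one full diagnostic cycle: minikube discovers each control-plane component's containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails each container's logs plus the dmesg, kubelet, and Docker journals. A self-contained sketch of the discovery step, assuming docker is on PATH (the helper name listK8sContainers is invented for illustration, it is not minikube's API):

// Minimal sketch of the per-component container discovery repeated
// throughout this log:
//   docker ps -a --filter name=k8s_<component> --format {{.ID}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers is a hypothetical helper name for illustration.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listK8sContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the "logs.go:276] N containers: [...]" lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}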
	I0728 18:41:58.826291    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:03.826623    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:03.826736    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:42:03.837866    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:42:03.837937    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:42:03.848309    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:42:03.848386    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:42:03.869063    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:42:03.869137    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:42:03.879869    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:42:03.879936    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:42:03.890573    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:42:03.890644    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:42:03.901277    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:42:03.901347    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:42:03.911286    4787 logs.go:276] 0 containers: []
	W0728 18:42:03.911301    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:42:03.911363    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:42:03.921769    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:42:03.921787    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:42:03.921792    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:42:03.937351    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:42:03.937361    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:42:03.948995    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:42:03.949006    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:42:03.960786    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:42:03.960797    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:42:03.996123    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:42:03.996131    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:42:04.000491    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:42:04.000498    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:42:04.037657    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:42:04.037668    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:42:04.050106    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:42:04.050120    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:42:04.062325    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:42:04.062336    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:42:04.073466    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:42:04.073478    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:42:04.096810    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:42:04.096820    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:42:04.111015    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:42:04.111040    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:42:04.127131    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:42:04.127143    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:42:04.138946    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:42:04.138958    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:42:04.151550    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:42:04.151561    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:42:06.672417    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:11.674158    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:11.678296    4787 out.go:177] 
	W0728 18:42:11.682298    4787 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0728 18:42:11.682316    4787 out.go:239] * 
	W0728 18:42:11.683566    4787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:42:11.698091    4787 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-638000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
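Exit status 80 is minikube's guest-error class (the GUEST_START reason above). The stderr shows the underlying wait loop: api_server.go polls https://10.0.2.15:8443/healthz with a short client timeout, gathers component logs after each failed probe, and gives up once the 6m0s node-wait budget is spent. A rough, self-contained reconstruction of that loop; the 5s timeout and 2.5s spacing are read off the timestamps above, not taken from minikube's source:

// Hedged reconstruction of the healthz wait loop seen in the stderr.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			// The apiserver cert is self-signed in this setup; a real
			// client would pin the cluster CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}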
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-28 18:42:11.804991 -0700 PDT m=+3382.401595459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-638000 -n running-upgrade-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-638000 -n running-upgrade-638000: exit status 2 (15.660987375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-638000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-777000          | force-systemd-flag-777000 | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-878000              | force-systemd-env-878000  | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-878000           | force-systemd-env-878000  | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT | 28 Jul 24 18:32 PDT |
	| start   | -p docker-flags-864000                | docker-flags-864000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-777000             | force-systemd-flag-777000 | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-777000          | force-systemd-flag-777000 | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT | 28 Jul 24 18:32 PDT |
	| start   | -p cert-expiration-273000             | cert-expiration-273000    | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-864000 ssh               | docker-flags-864000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-864000 ssh               | docker-flags-864000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-864000                | docker-flags-864000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT | 28 Jul 24 18:32 PDT |
	| start   | -p cert-options-660000                | cert-options-660000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-660000 ssh               | cert-options-660000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-660000 -- sudo        | cert-options-660000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-660000                | cert-options-660000       | jenkins | v1.33.1 | 28 Jul 24 18:32 PDT | 28 Jul 24 18:32 PDT |
	| start   | -p running-upgrade-638000             | minikube                  | jenkins | v1.26.0 | 28 Jul 24 18:32 PDT | 28 Jul 24 18:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-638000             | running-upgrade-638000    | jenkins | v1.33.1 | 28 Jul 24 18:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-273000             | cert-expiration-273000    | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-273000             | cert-expiration-273000    | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT | 28 Jul 24 18:35 PDT |
	| start   | -p kubernetes-upgrade-980000          | kubernetes-upgrade-980000 | jenkins | v1.33.1 | 28 Jul 24 18:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-980000          | kubernetes-upgrade-980000 | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| start   | -p kubernetes-upgrade-980000          | kubernetes-upgrade-980000 | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-980000          | kubernetes-upgrade-980000 | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| start   | -p stopped-upgrade-278000             | minikube                  | jenkins | v1.26.0 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-278000 stop           | minikube                  | jenkins | v1.26.0 | 28 Jul 24 18:36 PDT | 28 Jul 24 18:36 PDT |
	| start   | -p stopped-upgrade-278000             | stopped-upgrade-278000    | jenkins | v1.33.1 | 28 Jul 24 18:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 18:36:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
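Every entry that follows uses the klog header layout just described ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). For post-processing a report like this one, a small parser sketch; the regular expression is ours, written against the format line above, not part of minikube:

// Parses the klog-style header described in the format line above.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+(\S+:\d+)\] (.*)$`)

func main() {
	sample := "I0728 18:36:59.153745    4935 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}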
	I0728 18:36:59.153745    4935 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:36:59.153919    4935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:59.153923    4935 out.go:304] Setting ErrFile to fd 2...
	I0728 18:36:59.153926    4935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:59.154084    4935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:36:59.155213    4935 out.go:298] Setting JSON to false
	I0728 18:36:59.173325    4935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3990,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:36:59.173395    4935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:36:59.177563    4935 out.go:177] * [stopped-upgrade-278000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:36:59.185497    4935 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:36:59.185538    4935 notify.go:220] Checking for updates...
	I0728 18:36:59.192021    4935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:36:59.195478    4935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:36:59.199557    4935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:36:59.200959    4935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:36:59.204505    4935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:36:59.207860    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:36:59.211506    4935 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0728 18:36:59.214598    4935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:36:59.218489    4935 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:36:59.225454    4935 start.go:297] selected driver: qemu2
	I0728 18:36:59.225459    4935 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:36:59.225511    4935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:36:59.227898    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:36:59.227915    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:36:59.227937    4935 start.go:340] cluster config:
	{Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:36:59.227995    4935 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:36:59.236487    4935 out.go:177] * Starting "stopped-upgrade-278000" primary control-plane node in "stopped-upgrade-278000" cluster
	I0728 18:36:59.240490    4935 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:36:59.240504    4935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0728 18:36:59.240514    4935 cache.go:56] Caching tarball of preloaded images
	I0728 18:36:59.240561    4935 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:36:59.240566    4935 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0728 18:36:59.240615    4935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/config.json ...
	I0728 18:36:59.241085    4935 start.go:360] acquireMachinesLock for stopped-upgrade-278000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:36:59.241117    4935 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "stopped-upgrade-278000"
	I0728 18:36:59.241126    4935 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:36:59.241132    4935 fix.go:54] fixHost starting: 
	I0728 18:36:59.241239    4935 fix.go:112] recreateIfNeeded on stopped-upgrade-278000: state=Stopped err=<nil>
	W0728 18:36:59.241248    4935 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:36:59.245472    4935 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-278000" ...
	I0728 18:36:59.891424    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:36:59.891604    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:36:59.911411    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:36:59.911505    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:36:59.929763    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:36:59.929830    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:36:59.941084    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:36:59.941149    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:36:59.951450    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:36:59.951519    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:36:59.967936    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:36:59.967997    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:36:59.978701    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:36:59.978755    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:36:59.988475    4787 logs.go:276] 0 containers: []
	W0728 18:36:59.988487    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:36:59.988532    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:36:59.999126    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:36:59.999145    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:36:59.999151    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:00.033458    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:00.033470    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:00.057470    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:00.057484    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:00.071758    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:00.071767    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:00.089243    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:00.089253    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:00.103932    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:00.103942    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:00.127028    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:00.127037    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:00.138593    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:00.138606    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:00.176836    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:00.176847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:00.190477    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:00.190489    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:00.204345    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:00.204357    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:00.216209    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:00.216218    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:00.228969    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:00.228980    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:00.247214    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:00.247225    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:00.251433    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:00.251439    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:00.262401    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:00.262411    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:00.273578    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:00.273592    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:02.786915    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:36:59.253526    4935 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:36:59.253624    4935 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50445-:22,hostfwd=tcp::50446-:2376,hostname=stopped-upgrade-278000 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/disk.qcow2
	I0728 18:36:59.300661    4935 main.go:141] libmachine: STDOUT: 
	I0728 18:36:59.300695    4935 main.go:141] libmachine: STDERR: 
	I0728 18:36:59.300701    4935 main.go:141] libmachine: Waiting for VM to start (ssh -p 50445 docker@127.0.0.1)...
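While the 4787 process keeps probing the old cluster, the 4935 process has launched QEMU with the guest's SSH port forwarded to the host (hostfwd=tcp::50445-:22) and now blocks until the guest answers. A sketch of that wait, assuming plain TCP reachability is a good-enough readiness signal (the log line suggests libmachine actually retries a full ssh login, so this is a simplification):

// Sketch of the "Waiting for VM to start" step: poll the forwarded
// SSH port until the guest accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // something is listening; sshd is likely up
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// 50445 is the hostfwd port from the qemu command line above.
	if err := waitForPort("127.0.0.1:50445", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}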
	I0728 18:37:07.787842    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:07.787967    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:07.800123    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:07.800199    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:07.811541    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:07.811626    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:07.822213    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:07.822280    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:07.833164    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:07.833232    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:07.843722    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:07.843781    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:07.854455    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:07.854516    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:07.867089    4787 logs.go:276] 0 containers: []
	W0728 18:37:07.867103    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:07.867161    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:07.882453    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:07.882475    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:07.882480    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:07.905364    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:07.905372    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:07.919943    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:07.919954    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:07.937523    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:07.937535    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:07.952573    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:07.952583    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:07.966672    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:07.966683    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:07.977921    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:07.977935    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:07.992635    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:07.992644    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:08.005264    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:08.005275    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:08.023614    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:08.023625    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:08.028481    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:08.028488    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:08.042663    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:08.042673    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:08.067377    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:08.067387    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:08.080317    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:08.080328    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:08.120783    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:08.120792    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:08.157138    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:08.157147    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:08.169309    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:08.169320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:10.683484    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:15.685037    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:15.685513    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:15.727722    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:15.727904    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:15.756625    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:15.756713    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:15.774139    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:15.774215    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:15.785661    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:15.785721    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:15.800256    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:15.800333    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:15.812339    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:15.812410    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:15.822441    4787 logs.go:276] 0 containers: []
	W0728 18:37:15.822452    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:15.822505    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:15.836898    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:15.836917    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:15.836923    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:15.877975    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:15.877985    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:15.891274    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:15.891286    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:15.915538    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:15.915550    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:15.929679    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:15.929693    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:15.944501    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:15.944512    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:15.959644    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:15.959659    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:15.971670    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:15.971683    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:15.983594    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:15.983608    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:15.988285    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:15.988294    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:16.002791    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:16.002802    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:16.014483    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:16.014497    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:16.026301    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:16.026314    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:16.049783    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:16.049791    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:16.062332    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:16.062342    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:16.079864    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:16.079874    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:16.114356    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:16.114367    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:18.628482    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:19.213062    4935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/config.json ...
	I0728 18:37:19.213728    4935 machine.go:94] provisionDockerMachine start ...
	I0728 18:37:19.213949    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.214416    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.214429    4935 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:37:19.286908    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:37:19.286931    4935 buildroot.go:166] provisioning hostname "stopped-upgrade-278000"
	I0728 18:37:19.287009    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.287167    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.287174    4935 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-278000 && echo "stopped-upgrade-278000" | sudo tee /etc/hostname
	I0728 18:37:19.341921    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-278000
	
	I0728 18:37:19.341981    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.342090    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.342099    4935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-278000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-278000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-278000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:37:19.396221    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:37:19.396232    4935 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1229/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1229/.minikube}
	I0728 18:37:19.396239    4935 buildroot.go:174] setting up certificates
	I0728 18:37:19.396243    4935 provision.go:84] configureAuth start
	I0728 18:37:19.396254    4935 provision.go:143] copyHostCerts
	I0728 18:37:19.396336    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem, removing ...
	I0728 18:37:19.396341    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem
	I0728 18:37:19.396517    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem (1082 bytes)
	I0728 18:37:19.397140    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem, removing ...
	I0728 18:37:19.397143    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem
	I0728 18:37:19.397203    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem (1123 bytes)
	I0728 18:37:19.397325    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem, removing ...
	I0728 18:37:19.397328    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem
	I0728 18:37:19.397384    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem (1679 bytes)
	I0728 18:37:19.397477    4935 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-278000 san=[127.0.0.1 localhost minikube stopped-upgrade-278000]
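The san=[...] list above becomes the subjectAltName set of the Docker server certificate, signed by the minikube CA from certs/ca.pem. As a hedged illustration of that one step (function names, key size, and usage bits are our assumptions, not minikube's code), the Go standard-library version looks roughly like:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert is an illustrative name. It signs a server cert with
// the given CA, carrying the SANs seen in the log line above:
// 127.0.0.1, localhost, minikube, stopped-upgrade-278000.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-278000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-278000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Self-signed stand-in for the minikube CA, only to make the sketch runnable.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := issueServerCert(ca, caKey)
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}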
	I0728 18:37:19.653996    4935 provision.go:177] copyRemoteCerts
	I0728 18:37:19.654049    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:37:19.654060    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:19.684034    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:37:19.691035    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 18:37:19.697760    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0728 18:37:19.704827    4935 provision.go:87] duration metric: took 308.578958ms to configureAuth
	I0728 18:37:19.704838    4935 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:37:19.704951    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:37:19.704983    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.705072    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.705076    4935 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:37:19.754818    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:37:19.754826    4935 buildroot.go:70] root file system type: tmpfs
	I0728 18:37:19.754880    4935 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:37:19.754926    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.755037    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.755073    4935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:37:19.809771    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:37:19.809815    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.809920    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.809929    4935 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:37:20.147529    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:37:20.147542    4935 machine.go:97] duration metric: took 933.805041ms to provisionDockerMachine
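
The diff-or-replace one-liner above is an idempotent update: the freshly rendered docker.service.new is only swapped in (followed by daemon-reload, enable, and restart) when it differs from the installed unit. Here diff failed because no unit existed yet, so the new file was installed and the service enabled. A sketch of the same pattern, assuming it runs as root on the guest:

    // Update-if-changed: install docker.service.new only when it differs
    // from the installed unit, then reload and restart. Run as root.
    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	want, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		panic(err)
    	}
    	have, herr := os.ReadFile(unit)
    	if herr == nil && bytes.Equal(have, want) {
    		return // unit unchanged: leave the running daemon alone
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    }
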
	I0728 18:37:20.147550    4935 start.go:293] postStartSetup for "stopped-upgrade-278000" (driver="qemu2")
	I0728 18:37:20.147557    4935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:37:20.147632    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:37:20.147641    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:20.175210    4935 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:37:20.176591    4935 info.go:137] Remote host: Buildroot 2021.02.12
	I0728 18:37:20.176599    4935 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/addons for local assets ...
	I0728 18:37:20.176678    4935 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/files for local assets ...
	I0728 18:37:20.176797    4935 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem -> 17282.pem in /etc/ssl/certs
	I0728 18:37:20.176928    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:37:20.179726    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:37:20.186292    4935 start.go:296] duration metric: took 38.736834ms for postStartSetup
	I0728 18:37:20.186305    4935 fix.go:56] duration metric: took 20.945184542s for fixHost
	I0728 18:37:20.186335    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:20.186433    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:20.186437    4935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:37:20.236512    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217040.154939754
	
	I0728 18:37:20.236521    4935 fix.go:216] guest clock: 1722217040.154939754
	I0728 18:37:20.236525    4935 fix.go:229] Guest: 2024-07-28 18:37:20.154939754 -0700 PDT Remote: 2024-07-28 18:37:20.186307 -0700 PDT m=+21.057955834 (delta=-31.367246ms)
	I0728 18:37:20.236536    4935 fix.go:200] guest clock delta is within tolerance: -31.367246ms
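
fix.go computes the guest/host clock delta by parsing the guest's date +%s.%N output and comparing it with the host wall clock. The same arithmetic, reproduced with the timestamps from this run and a hypothetical 2-second tolerance:

    // Clock-delta arithmetic with the values from this run.
    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    func main() {
    	guest := time.Unix(1722217040, 154939754) // parsed from `date +%s.%N` on the guest
    	host := time.Date(2024, 7, 28, 18, 37, 20, 186307000, time.FixedZone("PDT", -7*3600))
    	delta := guest.Sub(host)
    	within := math.Abs(delta.Seconds()) < 2.0 // hypothetical 2s tolerance
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, within) // about -31.367ms, true
    }
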
	I0728 18:37:20.236539    4935 start.go:83] releasing machines lock for "stopped-upgrade-278000", held for 20.995427s
	I0728 18:37:20.236606    4935 ssh_runner.go:195] Run: cat /version.json
	I0728 18:37:20.236616    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:20.236621    4935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:37:20.236640    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	W0728 18:37:20.261470    4935 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0728 18:37:20.261532    4935 ssh_runner.go:195] Run: systemctl --version
	I0728 18:37:20.263930    4935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:37:20.266147    4935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:37:20.266181    4935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0728 18:37:20.270190    4935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0728 18:37:20.274796    4935 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:37:20.274807    4935 start.go:495] detecting cgroup driver to use...
	I0728 18:37:20.274879    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:37:20.283480    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0728 18:37:20.286583    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:37:20.289644    4935 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:37:20.289668    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:37:20.292997    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:37:20.296515    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:37:20.300224    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:37:20.303382    4935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:37:20.306475    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:37:20.309480    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:37:20.312975    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:37:20.316148    4935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:37:20.318882    4935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:37:20.321395    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:20.385497    4935 ssh_runner.go:195] Run: sudo systemctl restart containerd
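
Every containerd edit above is the same move: rewrite one key of /etc/containerd/config.toml in place with sed while preserving indentation. A sketch of the SystemdCgroup flip in Go, using the same regular-expression shape as the logged command:

    // In-place TOML key rewrite mirroring the logged sed command.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Same shape as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }
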
	I0728 18:37:20.396134    4935 start.go:495] detecting cgroup driver to use...
	I0728 18:37:20.396196    4935 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:37:20.404454    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:37:20.443309    4935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:37:20.449824    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:37:20.454736    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:37:20.459213    4935 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:37:20.516982    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:37:20.522135    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:37:20.527128    4935 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:37:20.528322    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:37:20.530924    4935 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:37:20.535592    4935 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:37:20.597945    4935 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:37:20.662015    4935 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:37:20.662076    4935 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:37:20.667331    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:20.730068    4935 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:37:21.893506    4935 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163420708s)
	I0728 18:37:21.893557    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:37:21.898596    4935 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:37:21.904425    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:37:21.908770    4935 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:37:21.974487    4935 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:37:22.058446    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:22.122940    4935 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:37:22.128941    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:37:22.134052    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:22.186998    4935 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:37:22.225098    4935 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:37:22.225177    4935 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:37:22.229077    4935 start.go:563] Will wait 60s for crictl version
	I0728 18:37:22.229142    4935 ssh_runner.go:195] Run: which crictl
	I0728 18:37:22.230579    4935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:37:22.244854    4935 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0728 18:37:22.244929    4935 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:37:22.260791    4935 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
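
"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are poll loops: stat the file (or run the probe) until success or the deadline passes. A minimal equivalent of the socket wait, with an assumed retry interval:

    // Poll-until-deadline, the shape behind "Will wait 60s for socket path".
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket is ready")
    }
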
	I0728 18:37:23.631098    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:23.631204    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:23.644843    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:23.644928    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:23.656358    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:23.656436    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:23.668226    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:23.668295    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:23.687119    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:23.687198    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:23.701407    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:23.701587    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:23.715165    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:23.715236    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:23.728307    4787 logs.go:276] 0 containers: []
	W0728 18:37:23.728321    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:23.728382    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:23.741064    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:23.741083    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:23.741089    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:23.746320    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:23.746332    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:23.761684    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:23.761700    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:23.774552    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:23.774563    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:23.801535    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:23.801556    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:23.846174    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:23.846193    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
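
The interleaved pid-4787 lines above implement log gathering for the other profile under test: list containers whose names match k8s_<component>, then tail 400 lines from each. A compressed sketch of that loop against the docker CLI (errors ignored for brevity):

    // List k8s_<component> containers, tail 400 log lines from each.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, _ := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    		for _, id := range strings.Fields(string(ids)) {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s %s ==\n%s", name, id, logs)
    		}
    	}
    }
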
	I0728 18:37:22.280278    4935 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0728 18:37:22.280342    4935 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0728 18:37:22.281753    4935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
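
The /etc/hosts rewrite above is an idempotent upsert: strip any stale host.minikube.internal line, then append the current mapping, so repeated runs converge on one entry. The same idea in Go:

    // Idempotent hosts entry: drop stale lines, append the current mapping.
    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "10.0.2.2\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }
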
	I0728 18:37:22.285525    4935 kubeadm.go:883] updating cluster {Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0728 18:37:22.285572    4935 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:37:22.285611    4935 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:37:22.295982    4935 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:37:22.295989    4935 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:37:22.296030    4935 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:37:22.299433    4935 ssh_runner.go:195] Run: which lz4
	I0728 18:37:22.300725    4935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 18:37:22.302042    4935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:37:22.302053    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0728 18:37:23.253604    4935 docker.go:649] duration metric: took 952.908041ms to copy over tarball
	I0728 18:37:23.253665    4935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
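
The preload step above stats /preloaded.tar.lz4 on the guest, copies the roughly 360 MB tarball over when it is absent, and unpacks it into /var with lz4. A local sketch of the check-then-extract half, assuming tar and lz4 are installed:

    // Check-then-extract half of the preload flow; assumes tar and lz4.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		// In the log, this branch triggers an scp of ~360 MB from the host cache.
    		panic("tarball missing, copy it here first: " + err.Error())
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }
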
	I0728 18:37:23.893395    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:23.893412    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:23.920948    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:23.920974    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:23.936784    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:23.936796    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:23.952475    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:23.952492    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:23.971443    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:23.971458    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:23.984477    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:23.984495    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:24.001267    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:24.001285    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:24.019664    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:24.019682    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:24.032628    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:24.032640    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:24.046245    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:24.046257    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:24.059452    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:24.059466    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:26.574779    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:24.453469    4935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.199787541s)
	I0728 18:37:24.453484    4935 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0728 18:37:24.469418    4935 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:37:24.472802    4935 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0728 18:37:24.477960    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:24.542509    4935 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:37:26.184629    4935 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.642102208s)
	I0728 18:37:26.184722    4935 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:37:26.201487    4935 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:37:26.201495    4935 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:37:26.201500    4935 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0728 18:37:26.206738    4935 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.208450    4935 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.210226    4935 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.210263    4935 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.212225    4935 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.212282    4935 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.213824    4935 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.214226    4935 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0728 18:37:26.215267    4935 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.215735    4935 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.216880    4935 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0728 18:37:26.216909    4935 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.217840    4935 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.217842    4935 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.218490    4935 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.219070    4935 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.584452    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.596287    4935 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0728 18:37:26.596307    4935 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.596360    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.599692    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.609023    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0728 18:37:26.614601    4935 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0728 18:37:26.614619    4935 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.614668    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.624586    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0728 18:37:26.625435    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.635309    4935 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0728 18:37:26.635330    4935 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.635381    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.645802    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0728 18:37:26.649779    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660122    4935 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0728 18:37:26.660142    4935 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660193    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660508    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0728 18:37:26.672331    4935 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0728 18:37:26.672353    4935 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0728 18:37:26.672390    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0728 18:37:26.672408    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0728 18:37:26.681744    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0728 18:37:26.681857    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0728 18:37:26.684584    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0728 18:37:26.684594    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0728 18:37:26.692189    4935 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0728 18:37:26.692198    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0728 18:37:26.705260    4935 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0728 18:37:26.705378    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.714552    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.722206    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0728 18:37:26.728336    4935 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0728 18:37:26.728358    4935 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.728412    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.738162    4935 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0728 18:37:26.738188    4935 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.738244    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.742720    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0728 18:37:26.742856    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:37:26.751924    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0728 18:37:26.751925    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0728 18:37:26.751959    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0728 18:37:26.793073    4935 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:37:26.793087    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0728 18:37:26.829612    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0728 18:37:26.978159    4935 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0728 18:37:26.978315    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.996823    4935 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0728 18:37:26.996851    4935 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.996923    4935 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:27.014660    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 18:37:27.014786    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:37:27.016347    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0728 18:37:27.016360    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0728 18:37:27.045032    4935 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:37:27.045046    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0728 18:37:27.288746    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0728 18:37:27.288788    4935 cache_images.go:92] duration metric: took 1.087280917s to LoadCachedImages
	W0728 18:37:27.288826    4935 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
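
Each cached image above goes through the same cycle: inspect the tag on the VM, and when the image ID does not match the expected hash, remove the tag and stream the cached docker-save tarball through docker load. One iteration, sketched with a hypothetical (truncated) expected ID:

    // One iteration of the cache cycle; wantID is a hypothetical truncated hash.
    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const tag = "registry.k8s.io/pause:3.7"
    	const cached = "/var/lib/minikube/images/pause_3.7" // docker-save tarball from the host cache
    	const wantID = "sha256:e5a475a03805..."             // truncated for the sketch

    	id, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
    	if strings.TrimSpace(string(id)) == wantID {
    		return // the right image is already present
    	}
    	_ = exec.Command("docker", "rmi", tag).Run() // "no such image" is fine here

    	f, err := os.Open(cached)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	load := exec.Command("docker", "load")
    	load.Stdin = f // equivalent of `sudo cat tarball | docker load`
    	if out, err := load.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }
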
	I0728 18:37:27.288833    4935 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0728 18:37:27.288879    4935 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-278000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0728 18:37:27.288937    4935 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:37:27.302901    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:37:27.302914    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:37:27.302918    4935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:37:27.302927    4935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-278000 NodeName:stopped-upgrade-278000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:37:27.302996    4935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-278000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:37:27.303045    4935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0728 18:37:27.306244    4935 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:37:27.306277    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:37:27.309058    4935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0728 18:37:27.314344    4935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:37:27.319081    4935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0728 18:37:27.324276    4935 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0728 18:37:27.325582    4935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:37:27.329426    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:27.393987    4935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:37:27.404039    4935 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000 for IP: 10.0.2.15
	I0728 18:37:27.404049    4935 certs.go:194] generating shared ca certs ...
	I0728 18:37:27.404058    4935 certs.go:226] acquiring lock for ca certs: {Name:mkc846ff99a644cdf9e42c80143f563c1808731e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.404224    4935 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key
	I0728 18:37:27.404287    4935 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key
	I0728 18:37:27.404296    4935 certs.go:256] generating profile certs ...
	I0728 18:37:27.404377    4935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key
	I0728 18:37:27.404396    4935 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae
	I0728 18:37:27.404407    4935 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0728 18:37:27.491632    4935 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae ...
	I0728 18:37:27.491648    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae: {Name:mk7ce09ea1f4e1e0adc458a4492d3e91736b42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.493065    4935 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae ...
	I0728 18:37:27.493073    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae: {Name:mkd7d851e0b6b2aa160e38a41ed99c247a312f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.493232    4935 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt
	I0728 18:37:27.493394    4935 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key
	I0728 18:37:27.493552    4935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.key
	I0728 18:37:27.493691    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem (1338 bytes)
	W0728 18:37:27.493722    4935 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728_empty.pem, impossibly tiny 0 bytes
	I0728 18:37:27.493728    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:37:27.493747    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem (1082 bytes)
	I0728 18:37:27.493766    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:37:27.493783    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem (1679 bytes)
	I0728 18:37:27.493819    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:37:27.494150    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:37:27.501137    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:37:27.508695    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:37:27.516221    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 18:37:27.523492    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0728 18:37:27.530161    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:37:27.537311    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:37:27.544657    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:37:27.552030    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0728 18:37:27.558730    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /usr/share/ca-certificates/17282.pem (1708 bytes)
	I0728 18:37:27.565290    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:37:27.572375    4935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:37:27.577651    4935 ssh_runner.go:195] Run: openssl version
	I0728 18:37:27.579528    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0728 18:37:27.582350    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.583774    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:54 /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.583792    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.585536    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0728 18:37:27.588889    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17282.pem && ln -fs /usr/share/ca-certificates/17282.pem /etc/ssl/certs/17282.pem"
	I0728 18:37:27.592280    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.593734    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:54 /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.593750    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.595642    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17282.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:37:27.598480    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:37:27.601420    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.602807    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.602824    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.604396    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
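
The openssl/ln sequence above installs each PEM under /usr/share/ca-certificates and links it into /etc/ssl/certs as <subject-hash>.0, the layout OpenSSL's CApath lookup expects (b5213941.0 for minikubeCA in this run). A sketch of one such installation:

    // Install a PEM and link it as <subject-hash>.0 for OpenSSL CApath lookup.
    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // replace any stale link
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    }
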
	I0728 18:37:27.607380    4935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:37:27.608870    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0728 18:37:27.610828    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0728 18:37:27.612557    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0728 18:37:27.614631    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0728 18:37:27.616375    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0728 18:37:27.618194    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
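
Each openssl x509 -checkend 86400 run above asks whether the certificate still has at least a day of validity left; a non-zero exit would trigger regeneration. The equivalent check in Go:

    // "-checkend 86400" in Go: is the cert valid for at least another day?
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for >= 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
    }
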
	I0728 18:37:27.619992    4935 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:37:27.620069    4935 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:37:27.630345    4935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:37:27.633588    4935 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0728 18:37:27.633593    4935 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0728 18:37:27.633612    4935 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 18:37:27.637256    4935 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:37:27.637566    4935 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-278000" does not appear in /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:37:27.637674    4935 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1229/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-278000" cluster setting kubeconfig missing "stopped-upgrade-278000" context setting]
	I0728 18:37:27.637862    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.638311    4935 kapi.go:59] client config for stopped-upgrade-278000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023945c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:37:27.638638    4935 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 18:37:27.641470    4935 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-278000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
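
The diff explains the reconfiguration that follows: the regenerated kubeadm.yaml uses the URI form unix:///var/run/cri-dockerd.sock for criSocket and switches the kubelet's cgroupDriver from systemd to cgroupfs (plus hairpin-veth and a longer runtime request timeout). The cgroup driver must match what the container runtime reports; a quick way to check that on the node (a sketch, not part of the logged run):

	docker info --format '{{.CgroupDriver}}'    # expected to print "cgroupfs" here
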
	I0728 18:37:27.641477    4935 kubeadm.go:1160] stopping kube-system containers ...
	I0728 18:37:27.641517    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:37:27.652570    4935 docker.go:483] Stopping containers: [912ef6eb9272 248ada8e5eb9 28fa0bcdbb2a b959039eb684 0ffba4e92043 988ccb20029d c67d661575ed ed9398b7868e]
	I0728 18:37:27.652632    4935 ssh_runner.go:195] Run: docker stop 912ef6eb9272 248ada8e5eb9 28fa0bcdbb2a b959039eb684 0ffba4e92043 988ccb20029d c67d661575ed ed9398b7868e
	I0728 18:37:27.663715    4935 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 18:37:27.669085    4935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:37:27.672501    4935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:37:27.672508    4935 kubeadm.go:157] found existing configuration files:
	
	I0728 18:37:27.672539    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0728 18:37:27.675477    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:37:27.675498    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:37:27.677977    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0728 18:37:27.680724    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:37:27.680748    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:37:27.683761    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0728 18:37:27.686129    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:37:27.686152    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:37:27.689000    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0728 18:37:27.691989    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:37:27.692016    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
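
The four grep/rm pairs above all apply the same stale-config rule: keep a kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise delete it so the following kubeadm phases regenerate it. The same logic as a single loop (a sketch; the endpoint is the one from the log):

	endpoint="https://control-plane.minikube.internal:50479"
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	done
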
	I0728 18:37:27.694524    4935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:37:27.697363    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:27.719565    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.156996    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.270076    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.292612    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
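
Instead of a full `kubeadm init`, the restart path replays only selected init phases (certs, kubeconfig, kubelet-start, control-plane, etcd), so existing on-disk cluster state is reused rather than recreated. The same five commands collapsed into a loop for readability (identical paths to the log):

	BIN=/var/lib/minikube/binaries/v1.24.1
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" kubelet-start \
	             "control-plane all" "etcd local"; do
	    # $phase is intentionally unquoted so "certs all" splits into two args
	    sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	done
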
	I0728 18:37:28.318480    4935 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:37:28.318555    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:37:28.819287    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
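
The wait loop polls pgrep for the apiserver: with -f the pattern is matched against the full command line, -x requires the match to cover that command line exactly, and -n reports only the newest matching process. Until kube-apiserver is running, the command exits non-zero and the loop re-polls:

	# prints the newest matching PID (exit 0) once kube-apiserver is up
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
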
	I0728 18:37:31.576934    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:31.577033    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:31.588141    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:31.588209    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:31.599051    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:31.599122    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:31.609931    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:31.609994    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:31.621792    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:31.621864    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:31.637682    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:31.637753    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:31.648927    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:31.649003    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:31.659472    4787 logs.go:276] 0 containers: []
	W0728 18:37:31.659488    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:31.659551    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:31.678469    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:31.678487    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:31.678492    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:31.703160    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:31.703175    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:31.717863    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:31.717875    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:31.735893    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:31.735904    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:31.748264    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:31.748276    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:31.762929    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:31.762939    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:31.775349    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:31.775361    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:31.787831    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:31.787845    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:31.802753    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:31.802767    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:31.814653    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:31.814663    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:31.819422    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:31.819431    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:31.838086    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:31.838097    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:31.849726    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:31.849738    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:31.862241    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:31.862253    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:31.906533    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:31.906550    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:31.946017    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:31.946036    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:31.973543    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:31.973565    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
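
Everything from the docker ps -a --filter=name=k8s_kube-apiserver line down to here is one iteration of minikube's diagnostic loop: after a /healthz probe times out, it enumerates each control-plane container, tails its logs, and also collects the kubelet and docker journals, dmesg, "describe nodes", and container status before probing again. The probe itself is roughly equivalent to (a sketch; -k skips verification, since here the request never gets any response at all):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
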
	I0728 18:37:29.319426    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:37:29.323734    4935 api_server.go:72] duration metric: took 1.005257042s to wait for apiserver process to appear ...
	I0728 18:37:29.323742    4935 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:37:29.323750    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:34.488223    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:34.324539    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:34.324561    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:39.490501    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:39.490629    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:39.502040    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:39.502104    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:39.512930    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:39.513008    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:39.523438    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:39.523504    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:39.534162    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:39.534237    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:39.545101    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:39.545171    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:39.555211    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:39.555276    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:39.565469    4787 logs.go:276] 0 containers: []
	W0728 18:37:39.565487    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:39.565539    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:39.575777    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:39.575792    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:39.575797    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:39.587070    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:39.587080    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:39.625560    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:39.625570    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:39.629928    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:39.629934    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:39.641302    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:39.641314    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:39.659161    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:39.659175    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:39.682230    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:39.682243    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:39.693813    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:39.693822    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:39.705311    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:39.705321    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:39.727659    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:39.727667    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:39.739490    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:39.739502    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:39.753582    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:39.753593    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:39.767722    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:39.767731    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:39.782230    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:39.782240    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:39.793658    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:39.793669    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:39.829041    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:39.829056    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:39.844454    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:39.844465    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:42.357522    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:39.325805    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:39.325871    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:47.359696    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:47.359855    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:47.371578    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:47.371651    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:47.381416    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:47.381495    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:47.392198    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:47.392261    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:47.402704    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:47.402768    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:47.412750    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:47.412818    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:47.423147    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:47.423210    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:47.435597    4787 logs.go:276] 0 containers: []
	W0728 18:37:47.435609    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:47.435663    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:47.446145    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:47.446164    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:47.446171    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:47.460312    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:47.460322    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:47.474687    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:47.474700    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:47.487970    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:47.487983    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:47.527834    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:47.527848    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:47.565890    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:47.565902    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:47.587836    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:47.587844    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:47.591925    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:47.591932    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:47.603372    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:47.603383    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:47.614429    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:47.614440    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:47.626723    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:47.626735    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:47.638794    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:47.638805    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:47.651222    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:47.651233    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:47.662891    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:47.662902    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:47.677030    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:47.677039    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:47.698715    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:47.698725    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:47.723190    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:47.723204    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:44.326393    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:44.326418    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:50.239423    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:49.326790    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:49.326818    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:55.241718    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:55.241873    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:37:55.253726    4787 logs.go:276] 2 containers: [2c332dd607ad a6ff8b1ad69d]
	I0728 18:37:55.253806    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:37:55.264636    4787 logs.go:276] 2 containers: [b64c5d7b3875 2d0363e75992]
	I0728 18:37:55.264707    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:37:55.275150    4787 logs.go:276] 1 containers: [6a2a80526e69]
	I0728 18:37:55.275215    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:37:55.285769    4787 logs.go:276] 2 containers: [4c98e709ff56 8369608b1758]
	I0728 18:37:55.285839    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:37:55.301276    4787 logs.go:276] 1 containers: [cab0edcf2d94]
	I0728 18:37:55.301349    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:37:55.312304    4787 logs.go:276] 2 containers: [ff940487610c 58e1b88fc31d]
	I0728 18:37:55.312369    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:37:55.328078    4787 logs.go:276] 0 containers: []
	W0728 18:37:55.328091    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:37:55.328148    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:37:55.338469    4787 logs.go:276] 2 containers: [cf4ebeaaa901 66765f844c41]
	I0728 18:37:55.338492    4787 logs.go:123] Gathering logs for etcd [2d0363e75992] ...
	I0728 18:37:55.338499    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0363e75992"
	I0728 18:37:55.352954    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:37:55.352964    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:37:55.376096    4787 logs.go:123] Gathering logs for kube-apiserver [a6ff8b1ad69d] ...
	I0728 18:37:55.376104    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ff8b1ad69d"
	I0728 18:37:55.399369    4787 logs.go:123] Gathering logs for kube-scheduler [4c98e709ff56] ...
	I0728 18:37:55.399380    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c98e709ff56"
	I0728 18:37:55.410916    4787 logs.go:123] Gathering logs for kube-scheduler [8369608b1758] ...
	I0728 18:37:55.410927    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8369608b1758"
	I0728 18:37:55.426100    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:37:55.426110    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:37:55.430357    4787 logs.go:123] Gathering logs for coredns [6a2a80526e69] ...
	I0728 18:37:55.430365    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2a80526e69"
	I0728 18:37:55.441519    4787 logs.go:123] Gathering logs for kube-controller-manager [ff940487610c] ...
	I0728 18:37:55.441528    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff940487610c"
	I0728 18:37:55.458914    4787 logs.go:123] Gathering logs for storage-provisioner [cf4ebeaaa901] ...
	I0728 18:37:55.458923    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf4ebeaaa901"
	I0728 18:37:55.470550    4787 logs.go:123] Gathering logs for etcd [b64c5d7b3875] ...
	I0728 18:37:55.470560    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64c5d7b3875"
	I0728 18:37:55.483968    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:37:55.483977    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:37:55.520100    4787 logs.go:123] Gathering logs for kube-apiserver [2c332dd607ad] ...
	I0728 18:37:55.520109    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c332dd607ad"
	I0728 18:37:55.534302    4787 logs.go:123] Gathering logs for kube-proxy [cab0edcf2d94] ...
	I0728 18:37:55.534314    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab0edcf2d94"
	I0728 18:37:55.545722    4787 logs.go:123] Gathering logs for kube-controller-manager [58e1b88fc31d] ...
	I0728 18:37:55.545733    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58e1b88fc31d"
	I0728 18:37:55.557695    4787 logs.go:123] Gathering logs for storage-provisioner [66765f844c41] ...
	I0728 18:37:55.557706    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66765f844c41"
	I0728 18:37:55.569645    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:37:55.569657    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:37:55.581720    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:37:55.581730    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:37:58.123642    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:54.327279    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:54.327316    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:03.126042    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:03.126117    4787 kubeadm.go:597] duration metric: took 4m5.049564458s to restartPrimaryControlPlane
	W0728 18:38:03.126186    4787 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0728 18:38:03.126216    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 18:37:59.328000    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:59.328033    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:04.085474    4787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:38:04.090573    4787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:38:04.093417    4787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:38:04.096224    4787 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:38:04.096230    4787 kubeadm.go:157] found existing configuration files:
	
	I0728 18:38:04.096257    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0728 18:38:04.098675    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:38:04.098698    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:38:04.101432    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0728 18:38:04.104530    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:38:04.104549    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:38:04.107359    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0728 18:38:04.109684    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:38:04.109707    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:38:04.112752    4787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0728 18:38:04.115458    4787 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:38:04.115479    4787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
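
With the stale kubeconfigs cleared, process 4787 now falls back to a full `kubeadm init` (next line) instead of another phased restart. Its long --ignore-preflight-errors list suppresses exactly the checks that fail on a machine that already hosted a cluster: existing manifest files, a populated /var/lib/minikube/etcd, port 10250 in use, swap, and the CPU/memory minimums. The preflight stage can also be run on its own to see those checks (same paths as the log):

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
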
	I0728 18:38:04.117892    4787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:38:04.133914    4787 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0728 18:38:04.133974    4787 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:38:04.183700    4787 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:38:04.183757    4787 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:38:04.183811    4787 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:38:04.233404    4787 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:38:04.237597    4787 out.go:204]   - Generating certificates and keys ...
	I0728 18:38:04.237657    4787 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:38:04.237699    4787 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:38:04.237808    4787 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:38:04.237898    4787 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0728 18:38:04.237956    4787 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:38:04.237984    4787 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0728 18:38:04.238042    4787 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0728 18:38:04.238078    4787 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:38:04.238182    4787 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:38:04.238281    4787 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:38:04.238329    4787 kubeadm.go:310] [certs] Using the existing "sa" key
	I0728 18:38:04.238434    4787 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:38:04.382907    4787 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:38:04.465069    4787 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:38:04.618178    4787 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:38:04.758063    4787 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:38:04.786162    4787 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:38:04.786543    4787 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:38:04.786642    4787 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:38:04.877102    4787 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:38:04.880372    4787 out.go:204]   - Booting up control plane ...
	I0728 18:38:04.880434    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:38:04.880484    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:38:04.880550    4787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:38:04.880624    4787 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:38:04.881327    4787 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0728 18:38:04.328825    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:04.328842    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:09.382904    4787 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501411 seconds
	I0728 18:38:09.383050    4787 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:38:09.387549    4787 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:38:09.905403    4787 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:38:09.905718    4787 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-638000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:38:10.409417    4787 kubeadm.go:310] [bootstrap-token] Using token: k7ek6g.vvicwoh071co5a96
	I0728 18:38:10.411871    4787 out.go:204]   - Configuring RBAC rules ...
	I0728 18:38:10.411930    4787 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:38:10.411974    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:38:10.416564    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:38:10.417602    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:38:10.418567    4787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:38:10.419517    4787 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:38:10.428713    4787 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:38:10.622681    4787 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:38:10.813056    4787 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:38:10.813449    4787 kubeadm.go:310] 
	I0728 18:38:10.813483    4787 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:38:10.813486    4787 kubeadm.go:310] 
	I0728 18:38:10.813524    4787 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:38:10.813527    4787 kubeadm.go:310] 
	I0728 18:38:10.813539    4787 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:38:10.813576    4787 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:38:10.813604    4787 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:38:10.813609    4787 kubeadm.go:310] 
	I0728 18:38:10.813637    4787 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:38:10.813641    4787 kubeadm.go:310] 
	I0728 18:38:10.813663    4787 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:38:10.813669    4787 kubeadm.go:310] 
	I0728 18:38:10.813699    4787 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:38:10.813739    4787 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:38:10.813784    4787 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:38:10.813787    4787 kubeadm.go:310] 
	I0728 18:38:10.813831    4787 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:38:10.813872    4787 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:38:10.813875    4787 kubeadm.go:310] 
	I0728 18:38:10.813924    4787 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k7ek6g.vvicwoh071co5a96 \
	I0728 18:38:10.813981    4787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b \
	I0728 18:38:10.813993    4787 kubeadm.go:310] 	--control-plane 
	I0728 18:38:10.813996    4787 kubeadm.go:310] 
	I0728 18:38:10.814044    4787 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:38:10.814048    4787 kubeadm.go:310] 
	I0728 18:38:10.814091    4787 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k7ek6g.vvicwoh071co5a96 \
	I0728 18:38:10.814155    4787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b 
	I0728 18:38:10.814214    4787 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
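
The join commands above embed --discovery-token-ca-cert-hash, a SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate with the standard kubeadm recipe; a sketch adapted to this cluster's cert directory (/var/lib/minikube/certs rather than the default /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl pkey -pubin -outform der \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
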
	I0728 18:38:10.814234    4787 cni.go:84] Creating CNI manager for ""
	I0728 18:38:10.814242    4787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:38:10.821210    4787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0728 18:38:10.825291    4787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0728 18:38:10.828264    4787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
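
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; the exact contents are not shown in the log. A representative conflist of the same shape (the field values below are illustrative assumptions, not the logged payload):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "addIf": "true",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
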
	I0728 18:38:10.833099    4787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:38:10.833138    4787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:38:10.833173    4787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-638000 minikube.k8s.io/updated_at=2024_07_28T18_38_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=running-upgrade-638000 minikube.k8s.io/primary=true
	I0728 18:38:10.883913    4787 ops.go:34] apiserver oom_adj: -16
	I0728 18:38:10.883928    4787 kubeadm.go:1113] duration metric: took 50.823042ms to wait for elevateKubeSystemPrivileges
	I0728 18:38:10.883935    4787 kubeadm.go:394] duration metric: took 4m12.8210955s to StartCluster
	I0728 18:38:10.883945    4787 settings.go:142] acquiring lock: {Name:mk87b264018a6cee2b66b065d01a79c5a5adf3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:38:10.884046    4787 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:38:10.884420    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:38:10.884638    4787 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:38:10.884650    4787 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:38:10.884688    4787 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-638000"
	I0728 18:38:10.884700    4787 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-638000"
	W0728 18:38:10.884707    4787 addons.go:243] addon storage-provisioner should already be in state true
	I0728 18:38:10.884718    4787 host.go:66] Checking if "running-upgrade-638000" exists ...
	I0728 18:38:10.884731    4787 config.go:182] Loaded profile config "running-upgrade-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:38:10.884741    4787 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-638000"
	I0728 18:38:10.884769    4787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-638000"
	I0728 18:38:10.884982    4787 retry.go:31] will retry after 1.026558627s: connect: dial unix /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/monitor: connect: connection refused
	I0728 18:38:10.885725    4787 kapi.go:59] client config for running-upgrade-638000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/running-upgrade-638000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10242c5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:38:10.885880    4787 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-638000"
	W0728 18:38:10.885887    4787 addons.go:243] addon default-storageclass should already be in state true
	I0728 18:38:10.885894    4787 host.go:66] Checking if "running-upgrade-638000" exists ...
	I0728 18:38:10.886408    4787 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:38:10.886413    4787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:38:10.886418    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:38:10.888270    4787 out.go:177] * Verifying Kubernetes components...
	I0728 18:38:10.896273    4787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:38:10.989127    4787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:38:10.994737    4787 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:38:10.994781    4787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:38:10.998723    4787 api_server.go:72] duration metric: took 114.074583ms to wait for apiserver process to appear ...
	I0728 18:38:10.998732    4787 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:38:10.998740    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:11.062706    4787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:38:11.918756    4787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:38:11.921684    4787 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:38:11.921692    4787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:38:11.921705    4787 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/running-upgrade-638000/id_rsa Username:docker}
	I0728 18:38:11.966421    4787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
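
Both addon manifests are applied from inside the VM with the node's bundled kubectl and the in-VM kubeconfig, so the install does not depend on the host-side kubeconfig that was being repaired earlier. A quick verification that the default StorageClass landed, using the same paths (a sketch):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
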
	I0728 18:38:09.329805    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:09.329827    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:16.000849    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:16.000888    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:14.330552    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:14.330573    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:21.001227    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:21.001279    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:19.331974    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:19.332009    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:25.996810    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:25.996863    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:24.332722    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:24.332744    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:30.988364    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:30.988421    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:29.324884    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:29.325057    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:29.345181    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:29.345259    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:29.355742    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:29.355811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:29.366642    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:29.366714    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:29.377746    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:29.377817    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:29.388353    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:29.388429    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:29.398750    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:29.398832    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:29.409066    4935 logs.go:276] 0 containers: []
	W0728 18:38:29.409078    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:29.409135    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:29.419308    4935 logs.go:276] 0 containers: []
	W0728 18:38:29.419324    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:29.419332    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:29.419338    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:29.423837    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:29.423844    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:29.464008    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:29.464016    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:29.475868    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:29.475878    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:29.491290    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:29.491304    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:29.509303    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:29.509318    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:29.523316    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:29.523329    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:29.549878    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:29.549892    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:29.568737    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:29.568751    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:29.579845    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:29.579856    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:29.605191    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:29.605209    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:29.617322    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:29.617334    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:29.702188    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:29.702201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:29.716761    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:29.716775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:29.731994    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:29.732006    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
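
Each time a healthz attempt fails, the test falls back to a diagnostics pass: it resolves container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails each hit with docker logs --tail 400 <id>, alongside journalctl for kubelet and Docker and a kubectl describe nodes. Note that docker ps -a includes exited containers, which is likely why process 4935 sees "2 containers" for most control-plane components (an older instance plus a restarted one) and none for storage-provisioner. A sketch of the two-step ps-then-logs pattern; minikube actually runs these commands through its ssh_runner inside the guest, and the helper names here are invented for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mimics the first step of each gathering pass:
    // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mimics the second step: docker logs --tail 400 <id>.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(component)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as the logs.go:276 lines
            for _, id := range ids {
                logs, _ := tailLogs(id)
                fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
            }
        }
    }
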
	I0728 18:38:32.241939    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:35.982505    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:35.982549    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:37.238320    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:37.238482    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:37.249456    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:37.249544    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:37.260739    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:37.260815    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:37.271569    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:37.271639    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:37.282187    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:37.282278    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:37.292862    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:37.292935    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:37.307311    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:37.307384    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:37.317520    4935 logs.go:276] 0 containers: []
	W0728 18:38:37.317535    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:37.317599    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:37.328078    4935 logs.go:276] 0 containers: []
	W0728 18:38:37.328088    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:37.328097    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:37.328103    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:37.339849    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:37.339861    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:37.344692    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:37.344698    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:37.382055    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:37.382069    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:37.396272    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:37.396282    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:37.414781    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:37.414792    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:37.427841    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:37.427852    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:37.452925    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:37.452936    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:37.466937    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:37.466947    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:37.479129    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:37.479139    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:37.493512    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:37.493521    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:37.504845    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:37.504857    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:37.529734    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:37.529742    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:37.566158    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:37.566165    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:37.577951    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:37.577961    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:40.978954    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:40.978987    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0728 18:38:41.364594    4787 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0728 18:38:41.368752    4787 out.go:177] * Enabled addons: storage-provisioner
	I0728 18:38:41.380719    4787 addons.go:510] duration metric: took 30.521652792s for enable addons: enabled=[storage-provisioner]
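
The warning above pins down the failure mode: enabling default-storageclass must list StorageClasses through the apiserver at https://10.0.2.15:8443, and that GET hit the same i/o timeout as the healthz probes, whereas the storage-provisioner manifest had already been applied, so only that addon is reported enabled. Going by the error chain ("making standard the default storage class" -> "listing StorageClasses" -> the failed GET), the callback plausibly resembles the following client-go sketch; this is an assumption about minikube's internals, not its actual code, though the annotation key is the standard Kubernetes one:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    const defaultAnnotation = "storageclass.kubernetes.io/is-default-class"

    // makeStandardDefault lists StorageClasses and marks "standard" as the
    // default. Hypothetical reconstruction of the failing callback; the exact
    // update strategy (annotate all vs. patch one) is an assumption.
    func makeStandardDefault(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        ctx := context.Background()
        // This List call is the request that failed in the log with
        // "dial tcp 10.0.2.15:8443: i/o timeout".
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return fmt.Errorf("listing StorageClasses: %w", err)
        }
        for i := range scs.Items {
            sc := &scs.Items[i]
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations[defaultAnnotation] = fmt.Sprintf("%t", sc.Name == "standard")
            if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := makeStandardDefault("/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }
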
	I0728 18:38:40.094936    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:45.976673    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:45.976722    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:45.092877    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:45.093122    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:45.113360    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:45.113447    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:45.127824    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:45.127909    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:45.138956    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:45.139027    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:45.149932    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:45.149997    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:45.160748    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:45.160817    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:45.172430    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:45.172508    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:45.183253    4935 logs.go:276] 0 containers: []
	W0728 18:38:45.183267    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:45.183326    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:45.193064    4935 logs.go:276] 0 containers: []
	W0728 18:38:45.193076    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:45.193085    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:45.193090    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:45.197870    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:45.197877    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:45.232535    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:45.232548    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:45.246194    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:45.246203    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:45.271734    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:45.271746    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:45.289559    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:45.289568    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:45.328832    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:45.328842    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:45.347292    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:45.347302    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:45.363590    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:45.363599    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:45.387161    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:45.387169    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:45.398581    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:45.398592    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:45.415675    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:45.415686    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:45.433325    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:45.433339    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:45.444591    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:45.444603    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:45.456369    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:45.456380    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:47.971482    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:50.975690    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:50.975740    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:52.971621    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:52.971724    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:52.982981    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:52.983061    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:52.993295    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:52.993370    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:53.004192    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:53.004265    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:53.014647    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:53.014713    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:53.025073    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:53.025131    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:53.035327    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:53.035397    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:53.045570    4935 logs.go:276] 0 containers: []
	W0728 18:38:53.045586    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:53.045648    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:53.055601    4935 logs.go:276] 0 containers: []
	W0728 18:38:53.055613    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:53.055623    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:53.055629    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:53.095144    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:53.095156    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:53.108375    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:53.108385    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:53.133456    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:53.133468    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:53.148165    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:53.148176    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:53.160037    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:53.160051    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:53.178000    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:53.178012    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:53.182240    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:53.182246    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:53.196934    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:53.196945    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:53.208632    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:53.208643    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:53.220801    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:53.220812    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:53.244515    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:53.244523    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:53.281102    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:53.281112    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:53.295041    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:53.295053    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:53.314630    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:53.314643    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:55.976011    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:55.976078    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:55.832036    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:00.977100    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:00.977151    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:00.833034    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:00.833253    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:00.852565    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:00.852657    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:00.868068    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:00.868150    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:00.884768    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:00.884827    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:00.895740    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:00.895811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:00.905821    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:00.905894    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:00.921448    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:00.921516    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:00.932210    4935 logs.go:276] 0 containers: []
	W0728 18:39:00.932221    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:00.932284    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:00.942414    4935 logs.go:276] 0 containers: []
	W0728 18:39:00.942426    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:00.942433    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:00.942439    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:00.981223    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:00.981238    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:00.992618    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:00.992634    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:01.008204    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:01.008214    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:01.046598    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:01.046606    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:01.060159    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:01.060174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:01.074850    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:01.074859    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:01.098739    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:01.098747    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:01.112997    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:01.113007    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:01.132550    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:01.132559    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:01.144468    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:01.144478    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:01.162090    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:01.162104    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:01.167006    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:01.167014    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:01.196247    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:01.196262    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:01.211076    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:01.211090    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
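
The recurring "container status" step is a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. use crictl when it resolves on PATH and drop back to the plain Docker CLI when crictl is absent or errors. The same preference order in Go, as an illustrative helper rather than minikube's implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, mirroring the
    // log's one-liner: `which crictl || echo crictl` ps -a || docker ps -a.
    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        // crictl missing or failed: fall back to the Docker CLI.
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }
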
	I0728 18:39:03.730419    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:05.978440    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:05.978492    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:08.732017    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:08.732276    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:08.755877    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:08.756000    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:08.772322    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:08.772405    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:08.784607    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:08.784678    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:08.803228    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:08.803296    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:08.813860    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:08.813920    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:08.824334    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:08.824404    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:08.834237    4935 logs.go:276] 0 containers: []
	W0728 18:39:08.834248    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:08.834300    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:08.849233    4935 logs.go:276] 0 containers: []
	W0728 18:39:08.849245    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:08.849252    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:08.849257    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:08.863983    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:08.863996    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:08.875043    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:08.875053    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:08.887369    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:08.887382    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:08.931707    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:08.931721    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:08.957555    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:08.957566    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:08.969444    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:08.969453    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:08.994533    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:08.994542    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:09.030190    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:09.030201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:09.043746    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:09.043762    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:09.047806    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:09.047815    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:09.065423    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:09.065436    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:09.080059    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:09.080073    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:09.097733    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:09.097743    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:09.111524    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:09.111538    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:10.979327    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:10.979486    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:11.013365    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:11.013441    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:11.035727    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:11.035797    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:11.046532    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:11.046606    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:11.057320    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:11.057397    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:11.068915    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:11.068987    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:11.080034    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:11.080104    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:11.090520    4787 logs.go:276] 0 containers: []
	W0728 18:39:11.090532    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:11.090590    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:11.101216    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:11.101230    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:11.101236    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:11.116207    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:11.116220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:11.128242    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:11.128252    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:11.139876    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:11.139888    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:11.154738    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:11.154751    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:11.168006    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:11.168017    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:11.180343    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:11.180354    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:11.214247    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:11.214257    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:11.218725    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:11.218731    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:11.304251    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:11.304266    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:11.318674    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:11.318688    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:11.336430    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:11.336440    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:11.359986    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:11.359995    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:11.626478    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:13.881318    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:16.628248    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:16.628443    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:16.651647    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:16.651751    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:16.666183    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:16.666266    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:16.678368    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:16.678433    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:16.689508    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:16.689583    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:16.700341    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:16.700414    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:16.710805    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:16.710870    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:16.720612    4935 logs.go:276] 0 containers: []
	W0728 18:39:16.720622    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:16.720679    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:16.730729    4935 logs.go:276] 0 containers: []
	W0728 18:39:16.730739    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:16.730746    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:16.730752    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:16.769601    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:16.769609    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:16.773571    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:16.773577    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:16.787894    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:16.787904    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:16.800129    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:16.800139    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:16.825136    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:16.825143    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:16.859977    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:16.859988    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:16.875413    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:16.875424    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:16.892924    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:16.892935    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:16.912503    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:16.912517    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:16.933821    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:16.933834    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:16.948627    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:16.948638    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:16.978911    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:16.978921    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:16.992748    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:16.992758    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:17.003885    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:17.003898    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:18.883199    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:18.883367    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:18.898965    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:18.899046    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:18.913538    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:18.913612    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:18.924297    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:18.924352    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:18.934869    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:18.934938    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:18.945632    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:18.945700    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:18.956429    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:18.956496    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:18.967418    4787 logs.go:276] 0 containers: []
	W0728 18:39:18.967429    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:18.967479    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:18.979148    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:18.979163    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:18.979170    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:18.983473    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:18.983481    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:19.024639    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:19.024649    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:19.036268    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:19.036278    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:19.051474    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:19.051485    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:19.074871    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:19.074882    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:19.088503    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:19.088515    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:19.100406    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:19.100416    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:19.134776    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:19.134786    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:19.149076    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:19.149087    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:19.169974    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:19.169985    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:19.181560    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:19.181570    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:19.193255    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:19.193266    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:21.712573    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:19.519815    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:26.714884    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:26.715077    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:26.738589    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:26.738690    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:26.755693    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:26.755764    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:26.779288    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:26.779366    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:26.791111    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:26.791179    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:26.802266    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:26.802342    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:26.813472    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:26.813538    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:26.823924    4787 logs.go:276] 0 containers: []
	W0728 18:39:26.823938    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:26.823988    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:26.835308    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:26.835323    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:26.835329    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:26.847584    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:26.847596    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:26.883102    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:26.883111    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:26.887976    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:26.887984    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:26.924619    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:26.924631    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:26.937421    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:26.937434    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:26.951750    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:26.951764    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:26.966070    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:26.966084    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:26.982152    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:26.982166    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:26.999350    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:26.999366    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:27.015908    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:27.015924    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:27.037275    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:27.037288    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:27.050235    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:27.050249    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:24.521696    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:24.521883    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:24.538732    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:24.538815    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:24.551436    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:24.551511    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:24.569429    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:24.569485    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:24.579961    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:24.580032    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:24.590451    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:24.590512    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:24.601165    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:24.601224    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:24.611572    4935 logs.go:276] 0 containers: []
	W0728 18:39:24.611589    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:24.611639    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:24.621639    4935 logs.go:276] 0 containers: []
	W0728 18:39:24.621650    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:24.621658    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:24.621664    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:24.625770    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:24.625779    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:24.637226    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:24.637236    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:24.673280    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:24.673293    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:24.687598    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:24.687611    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:24.703934    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:24.703944    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:24.727506    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:24.727514    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:24.742457    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:24.742472    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:24.756434    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:24.756444    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:24.779945    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:24.779955    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:24.792226    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:24.792238    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:24.810538    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:24.810549    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:24.822401    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:24.822413    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:24.861387    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:24.861396    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:24.885575    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:24.885585    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:27.398429    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:29.577193    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:32.400511    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
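The five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... context deadline exceeded" line shows the probe is a plain HTTPS GET against /healthz bounded by a short client deadline. A minimal Go sketch of that pattern follows; the 5 s timeout is inferred from the timestamps, and the TLS handling is illustrative, not minikube's actual client setup:

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz issues one deadline-bounded GET, the way each
    // "Checking apiserver healthz" / "stopped" pair above reads.
    func checkHealthz(url string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    	if err != nil {
    		return err
    	}
    	// The apiserver on 10.0.2.15:8443 serves a cluster-local cert; a real
    	// client would pin the cluster CA. Skipping verification keeps the
    	// sketch short.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Do(req)
    	if err != nil {
    		return fmt.Errorf("stopped: %s: %w", url, err)
    	}
    	defer resp.Body.Close()
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println(err)
    	}
    }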
	I0728 18:39:32.400685    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:32.418383    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:32.418469    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:32.432246    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:32.432323    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:32.443363    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:32.443429    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:32.453910    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:32.453985    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:32.464401    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:32.464473    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:32.475572    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:32.475644    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:32.486224    4935 logs.go:276] 0 containers: []
	W0728 18:39:32.486235    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:32.486293    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:32.496222    4935 logs.go:276] 0 containers: []
	W0728 18:39:32.496236    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:32.496244    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:32.496250    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:32.521781    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:32.521794    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:32.540169    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:32.540179    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:32.552061    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:32.552071    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:32.566137    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:32.566146    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:32.578029    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:32.578040    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:32.582254    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:32.582261    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:32.596493    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:32.596504    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:32.612182    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:32.612193    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:32.626787    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:32.626801    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:32.651933    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:32.651943    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:32.677475    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:32.677486    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:32.716573    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:32.716582    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:32.775915    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:32.775925    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:32.790554    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:32.790569    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
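Each retry round begins by enumerating the control-plane containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; an empty result produces the "No container was found matching" warning seen for kindnet and storage-provisioner. A rough, self-contained Go equivalent of that enumeration (the component list is copied from the log; the helper name is made up):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers runs the same docker ps filter as the log lines above
    // and returns the matching container IDs (possibly none).
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }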
	I0728 18:39:34.579386    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:34.579726    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:34.615717    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:34.615847    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:34.635633    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:34.635730    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:34.651449    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:34.651529    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:34.667716    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:34.667790    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:34.684403    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:34.684473    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:34.695713    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:34.695786    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:34.707315    4787 logs.go:276] 0 containers: []
	W0728 18:39:34.707326    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:34.707379    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:34.718918    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:34.718932    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:34.718937    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:34.730826    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:34.730838    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:34.743625    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:34.743635    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:34.761710    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:34.761722    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:34.775578    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:34.775591    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:34.811924    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:34.811934    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:34.816717    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:34.816723    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:34.831052    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:34.831065    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:34.846795    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:34.846808    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:34.870259    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:34.870266    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:34.881948    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:34.881959    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:34.920872    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:34.920883    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:34.938570    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:34.938580    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
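For every container ID found, the collector then tails the last 400 lines with `docker logs --tail 400 <id>`. In the report this runs through ssh_runner inside the guest VM; the sketch below does the same locally with plain exec (the wrapper function is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs mirrors the `docker logs --tail 400 <id>` calls above.
    func tailContainerLogs(id string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"docker logs --tail 400 "+id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// 45ee255a36d7 is the kube-apiserver container ID from the log above.
    	logs, err := tailContainerLogs("45ee255a36d7")
    	if err != nil {
    		fmt.Println("docker logs failed:", err)
    	}
    	fmt.Print(logs)
    }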
	I0728 18:39:37.458356    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:35.303542    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:42.460473    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:42.460683    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:42.479244    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:42.479336    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:42.493442    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:42.493516    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:42.505165    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:42.505229    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:42.516202    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:42.516270    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:42.526999    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:42.527066    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:42.537853    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:42.537912    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:42.548345    4787 logs.go:276] 0 containers: []
	W0728 18:39:42.548366    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:42.548428    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:42.559637    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:42.559652    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:42.559658    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:42.583193    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:42.583202    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:42.588042    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:42.588049    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:42.602759    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:42.602771    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:42.614973    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:42.614983    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:42.633926    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:42.633936    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:42.649742    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:42.649752    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:42.663272    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:42.663286    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:42.697192    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:42.697201    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:42.735137    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:42.735148    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:42.749298    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:42.749312    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:42.761518    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:42.761532    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:42.779677    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:42.779688    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
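The "container status" probe is a single bash line with two fallbacks: if crictl is not installed, `which crictl` fails and `echo crictl` substitutes the bare word, which then also fails to execute, so the trailing `|| sudo docker ps -a` takes over. Reproduced here for clarity (assumption: run on a host where bash, sudo, and docker are available):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same fallback chain as the log line: prefer crictl, degrade to docker.
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }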
	I0728 18:39:40.305618    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:40.305839    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:40.324367    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:40.324456    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:40.337695    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:40.337770    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:40.349213    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:40.349284    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:40.359924    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:40.359993    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:40.370939    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:40.371012    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:40.381639    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:40.381710    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:40.392104    4935 logs.go:276] 0 containers: []
	W0728 18:39:40.392114    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:40.392173    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:40.402652    4935 logs.go:276] 0 containers: []
	W0728 18:39:40.402665    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:40.402673    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:40.402679    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:40.416664    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:40.416674    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:40.428395    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:40.428406    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:40.467743    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:40.467753    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:40.472226    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:40.472233    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:40.496790    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:40.496803    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:40.511522    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:40.511537    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:40.529127    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:40.529137    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:40.547324    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:40.547334    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:40.559139    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:40.559149    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:40.595251    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:40.595262    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:40.609668    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:40.609678    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:40.624667    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:40.624676    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:40.639180    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:40.639189    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:40.656260    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:40.656271    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:43.182323    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:45.293784    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:48.184543    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:48.184918    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:48.213021    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:48.213151    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:48.230639    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:48.230727    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:48.246501    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:48.246573    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:48.265438    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:48.265502    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:48.276306    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:48.276368    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:48.287389    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:48.287453    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:48.297879    4935 logs.go:276] 0 containers: []
	W0728 18:39:48.297892    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:48.297945    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:48.307900    4935 logs.go:276] 0 containers: []
	W0728 18:39:48.307912    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:48.307920    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:48.307927    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:48.343552    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:48.343563    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:48.365254    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:48.365263    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:48.382433    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:48.382449    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:48.396335    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:48.396349    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:48.414006    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:48.414019    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:48.425111    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:48.425120    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:48.448046    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:48.448053    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:48.461885    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:48.461896    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:48.473008    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:48.473017    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:48.512120    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:48.512133    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:48.516572    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:48.516579    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:48.530312    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:48.530321    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:48.555183    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:48.555192    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:48.566839    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:48.566853    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:50.295940    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:50.296117    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:50.311765    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:50.311849    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:50.324491    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:50.324562    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:50.334712    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:50.334782    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:50.345048    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:50.345114    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:50.355477    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:50.355553    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:50.369297    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:50.369368    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:50.379507    4787 logs.go:276] 0 containers: []
	W0728 18:39:50.379517    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:50.379572    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:50.389716    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:50.389729    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:50.389734    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:50.425031    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:50.425044    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:50.442672    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:50.442686    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:50.456695    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:50.456709    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:50.468117    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:50.468130    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:50.479451    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:50.479465    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:50.496546    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:50.496559    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:50.521026    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:50.521036    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:50.525456    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:50.525464    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:50.561185    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:50.561196    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:50.575804    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:50.575815    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:50.587581    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:50.587591    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:50.599488    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:50.599498    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:53.115246    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:51.083423    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:58.117364    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:58.117579    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:58.135681    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:39:58.135775    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:58.156447    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:39:58.156519    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:58.167902    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:39:58.167968    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:58.178640    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:39:58.178708    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:58.189390    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:39:58.189469    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:58.205272    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:39:58.205341    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:58.215825    4787 logs.go:276] 0 containers: []
	W0728 18:39:58.215836    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:58.215892    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:58.226064    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:39:58.226082    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:58.226087    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:58.230816    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:39:58.230824    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:39:58.246946    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:39:58.246960    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:39:58.259079    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:39:58.259093    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:39:58.270708    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:39:58.270721    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:39:58.288065    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:58.288075    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:58.323756    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:58.323762    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:58.363530    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:39:58.363541    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:39:58.378100    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:39:58.378111    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:39:58.392336    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:39:58.392348    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:39:58.403452    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:39:58.403462    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:39:58.418395    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:58.418406    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:58.442451    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:39:58.442459    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
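Host-level evidence comes from three fixed commands: journalctl for the kubelet unit, journalctl for the docker and cri-docker units, and a dmesg call restricted to warn-and-above severity levels, each capped at 400 lines. A small Go driver that issues the same commands (the loop structure is illustrative; the command strings are taken verbatim from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []string{
    		"sudo journalctl -u kubelet -n 400",
    		"sudo journalctl -u docker -u cri-docker -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		if err != nil {
    			fmt.Println(c, "failed:", err)
    			continue
    		}
    		fmt.Print(string(out))
    	}
    }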
	I0728 18:39:56.084315    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:56.084644    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:56.123923    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:56.124072    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:56.145110    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:56.145205    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:56.160024    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:56.160104    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:56.172714    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:56.172790    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:56.183745    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:56.183811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:56.195987    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:56.196061    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:56.206510    4935 logs.go:276] 0 containers: []
	W0728 18:39:56.206521    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:56.206578    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:56.216519    4935 logs.go:276] 0 containers: []
	W0728 18:39:56.216533    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:56.216541    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:56.216547    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:56.239828    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:56.239840    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:56.251550    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:56.251563    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:56.266129    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:56.266139    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:56.277615    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:56.277626    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:56.295349    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:56.295362    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:56.306841    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:56.306851    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:56.346145    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:56.346153    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:56.350291    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:56.350298    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:56.369001    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:56.369011    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:56.403804    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:56.403814    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:56.417762    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:56.417775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:56.431469    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:56.431482    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:56.456345    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:56.456356    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:56.471606    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:56.471620    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:58.985989    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:00.956866    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:03.988269    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:03.988621    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:04.019049    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:04.019173    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:04.036527    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:04.036628    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:04.050245    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:04.050350    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:04.061992    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:04.062060    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:04.072761    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:04.072825    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:04.083900    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:04.083964    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:04.094885    4935 logs.go:276] 0 containers: []
	W0728 18:40:04.094903    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:04.094959    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:04.105165    4935 logs.go:276] 0 containers: []
	W0728 18:40:04.105175    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:04.105183    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:04.105190    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:05.959219    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:05.959496    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:05.988596    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:05.988707    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:06.006500    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:06.006589    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:06.019622    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:06.019685    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:06.035835    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:06.035904    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:06.046223    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:06.046294    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:06.056796    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:06.056858    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:06.067041    4787 logs.go:276] 0 containers: []
	W0728 18:40:06.067052    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:06.067106    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:06.077688    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:06.077704    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:06.077709    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:06.096425    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:06.096439    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:06.109742    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:06.109753    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:06.124466    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:06.124480    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:06.136246    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:06.136256    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:06.166041    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:06.166055    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:06.179778    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:06.179792    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:06.184216    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:06.184234    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:06.220189    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:06.220204    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:06.244278    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:06.244295    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:06.256328    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:06.256342    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:06.267713    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:06.267728    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:06.303682    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:06.303697    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:08.819417    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:04.119968    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:04.119978    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:04.131910    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:04.131922    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:04.143764    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:04.143775    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:04.181240    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:04.181260    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:04.196095    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:04.196108    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:04.211612    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:04.211625    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:04.223754    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:04.223766    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:04.243813    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:04.243827    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:04.268461    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:04.268468    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:04.304433    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:04.304445    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:04.330396    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:04.330407    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:04.343777    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:04.343787    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:04.348278    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:04.348286    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:04.362808    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:04.362819    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:06.876507    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:13.821714    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:13.821889    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:13.849752    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:13.849837    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:11.878718    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:11.878890    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:11.891184    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:11.891267    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:11.902019    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:11.902098    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:11.912478    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:11.912539    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:11.922838    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:11.922913    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:11.933319    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:11.933385    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:11.947258    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:11.947326    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:11.958735    4935 logs.go:276] 0 containers: []
	W0728 18:40:11.958746    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:11.958810    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:11.968745    4935 logs.go:276] 0 containers: []
	W0728 18:40:11.968756    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:11.968764    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:11.968770    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:11.980472    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:11.980486    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:12.019491    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:12.019500    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:12.053762    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:12.053775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:12.079172    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:12.079183    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:12.094179    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:12.094187    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:12.119319    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:12.119328    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:12.134805    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:12.134816    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:12.146277    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:12.146289    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:12.158086    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:12.158096    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:12.175711    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:12.175725    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:12.180168    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:12.180174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:12.194206    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:12.194220    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:12.208188    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:12.208201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:12.223314    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:12.223323    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:13.862818    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:13.862894    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:13.873899    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:13.873962    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:13.883978    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:13.884054    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:13.894475    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:13.894544    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:13.904565    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:13.904630    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:13.914796    4787 logs.go:276] 0 containers: []
	W0728 18:40:13.914812    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:13.914873    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:13.925026    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:13.925047    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:13.925053    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:13.948400    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:13.948408    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:13.982106    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:13.982114    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:13.993672    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:13.993681    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:14.008190    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:14.008201    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:14.022078    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:14.022089    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:14.033586    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:14.033597    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:14.044838    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:14.044849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:14.059970    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:14.059981    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:14.078188    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:14.078199    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:14.083124    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:14.083133    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:14.118875    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:14.118887    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:14.130605    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:14.130616    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:16.644652    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:14.741086    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:21.647167    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:21.647655    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:21.680072    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:21.680206    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:21.699363    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:21.699448    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:21.713009    4787 logs.go:276] 2 containers: [92d3c820798b 769aaacac2ed]
	I0728 18:40:21.713091    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:21.724504    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:21.724570    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:21.735707    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:21.735781    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:21.745957    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:21.746023    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:21.755892    4787 logs.go:276] 0 containers: []
	W0728 18:40:21.755908    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:21.755968    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:21.766338    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:21.766353    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:21.766358    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:21.778335    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:21.778347    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:21.790335    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:21.790346    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:21.795192    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:21.795201    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:21.809389    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:21.809400    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:21.820963    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:21.820976    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:21.833023    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:21.833037    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:21.844620    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:21.844634    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:21.861367    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:21.861377    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:21.878678    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:21.878690    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:21.902300    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:21.902312    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:21.937243    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:21.937250    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:21.973727    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:21.973738    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:19.743345    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:19.743571    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:19.766935    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:19.767055    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:19.784755    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:19.784838    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:19.799790    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:19.799863    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:19.811005    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:19.811081    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:19.821680    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:19.821753    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:19.832029    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:19.832094    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:19.842231    4935 logs.go:276] 0 containers: []
	W0728 18:40:19.842245    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:19.842310    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:19.852144    4935 logs.go:276] 0 containers: []
	W0728 18:40:19.852157    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:19.852166    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:19.852172    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:19.856627    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:19.856634    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:19.871232    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:19.871243    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:19.885476    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:19.885487    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:19.896392    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:19.896403    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:19.912202    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:19.912212    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:19.934628    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:19.934636    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:19.946132    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:19.946142    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:19.983856    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:19.983867    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:20.008543    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:20.008559    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:20.022776    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:20.022786    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:20.034425    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:20.034435    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:20.068930    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:20.068940    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:20.087604    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:20.087615    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:20.099543    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:20.099558    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:22.619508    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:24.490506    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:27.621773    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:27.621940    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:27.639410    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:27.639495    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:27.655136    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:27.655209    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:27.666934    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:27.666997    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:27.677366    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:27.677428    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:27.687689    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:27.687754    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:27.705645    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:27.705708    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:27.715633    4935 logs.go:276] 0 containers: []
	W0728 18:40:27.715646    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:27.715700    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:27.725382    4935 logs.go:276] 0 containers: []
	W0728 18:40:27.725393    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:27.725401    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:27.725406    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:27.729673    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:27.729680    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:27.743521    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:27.743533    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:27.762300    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:27.762310    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:27.784958    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:27.784965    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:27.810007    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:27.810018    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:27.823985    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:27.823995    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:27.835552    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:27.835564    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:27.852806    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:27.852816    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:27.888887    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:27.888894    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:27.923366    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:27.923379    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:27.936168    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:27.936180    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:27.951564    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:27.951575    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:27.963582    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:27.963596    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:27.976776    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:27.976789    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:29.491827    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:29.491923    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:29.504085    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:29.504156    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:29.518854    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:29.518936    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:29.529563    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:29.529635    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:29.539780    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:29.539845    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:29.550276    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:29.550352    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:29.561012    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:29.561076    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:29.571029    4787 logs.go:276] 0 containers: []
	W0728 18:40:29.571040    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:29.571094    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:29.582149    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:29.582166    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:29.582171    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:29.594559    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:29.594575    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:29.607119    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:29.607129    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:29.618830    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:29.618841    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:29.655294    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:29.655308    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:29.659823    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:29.659834    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:29.673480    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:29.673493    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:29.684844    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:29.684855    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:29.700041    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:29.700053    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:29.719742    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:29.719755    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:29.753668    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:29.753676    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:29.764837    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:29.764849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:29.776599    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:29.776611    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:29.801874    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:29.801882    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:29.813859    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:29.813870    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:32.341367    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:30.490460    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:37.343745    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:37.343979    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:37.369107    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:37.369241    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:37.386709    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:37.386793    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:37.400994    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:37.401089    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:37.411691    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:37.411752    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:37.424467    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:37.424540    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:37.435321    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:37.435399    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:37.447312    4787 logs.go:276] 0 containers: []
	W0728 18:40:37.447328    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:37.447381    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:37.457962    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:37.457979    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:37.457985    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:37.463214    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:37.463220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:37.477179    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:37.477189    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:37.488322    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:37.488333    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:37.500373    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:37.500385    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:37.534302    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:37.534320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:37.547295    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:37.547309    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:37.562553    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:37.562564    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:37.586340    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:37.586351    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:37.598243    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:37.598257    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:37.634920    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:37.634934    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:37.648247    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:37.648265    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:37.660414    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:37.660424    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:37.674632    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:37.674642    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:37.686324    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:37.686336    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:35.492815    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:35.492975    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:35.509171    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:35.509258    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:35.521699    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:35.521835    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:35.532824    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:35.532895    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:35.551257    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:35.551335    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:35.562229    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:35.562301    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:35.577198    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:35.577265    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:35.587671    4935 logs.go:276] 0 containers: []
	W0728 18:40:35.587681    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:35.587738    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:35.597951    4935 logs.go:276] 0 containers: []
	W0728 18:40:35.597962    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:35.597970    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:35.597998    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:35.620194    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:35.620205    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:35.631914    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:35.631927    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:35.648627    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:35.648639    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:35.660652    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:35.660663    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:35.674975    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:35.674985    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:35.686582    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:35.686593    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:35.700814    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:35.700824    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:35.724087    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:35.724095    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:35.728285    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:35.728292    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:35.743130    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:35.743141    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:35.757318    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:35.757331    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:35.795832    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:35.795840    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:35.820238    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:35.820250    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:35.832108    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:35.832118    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:38.369465    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:40.213853    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:43.371782    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:43.371952    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:43.384282    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:43.384351    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:43.396464    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:43.396532    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:43.406718    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:43.406776    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:43.417254    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:43.417319    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:43.428082    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:43.428166    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:43.448145    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:43.448204    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:43.458468    4935 logs.go:276] 0 containers: []
	W0728 18:40:43.458480    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:43.458541    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:43.469676    4935 logs.go:276] 0 containers: []
	W0728 18:40:43.469687    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:43.469696    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:43.469702    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:43.491808    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:43.491818    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:43.517069    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:43.517082    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:43.530783    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:43.530794    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:43.555292    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:43.555303    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:43.567187    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:43.567198    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:43.571835    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:43.571842    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:43.606768    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:43.606781    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:43.618683    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:43.618696    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:43.641411    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:43.641421    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:43.656866    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:43.656876    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:43.695048    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:43.695056    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:43.706221    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:43.706231    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:43.721851    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:43.721865    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:43.740218    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:43.740231    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:45.216007    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:45.216258    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:45.237098    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:45.237214    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:45.251769    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:45.251846    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:45.264114    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:45.264186    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:45.275458    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:45.275529    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:45.286252    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:45.286329    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:45.297130    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:45.297204    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:45.306967    4787 logs.go:276] 0 containers: []
	W0728 18:40:45.306979    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:45.307033    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:45.317474    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:45.317491    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:45.317496    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:45.341463    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:45.341473    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:45.355144    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:45.355155    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:45.366313    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:45.366324    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:45.377361    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:45.377372    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:45.389293    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:45.389304    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:45.425261    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:45.425271    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:45.437477    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:45.437491    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:45.449919    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:45.449931    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:45.464268    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:45.464280    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:45.476853    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:45.476867    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:45.494443    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:45.494456    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:45.509811    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:45.509823    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:45.546164    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:45.546173    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:45.550651    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:45.550659    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:48.064365    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:46.256512    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:53.066729    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:53.067139    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:53.107482    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:40:53.107611    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:53.128646    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:40:53.128752    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:53.143211    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:40:53.143289    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:53.158107    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:40:53.158171    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:53.168975    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:40:53.169046    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:53.179495    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:40:53.179562    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:53.190191    4787 logs.go:276] 0 containers: []
	W0728 18:40:53.190205    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:53.190267    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:53.201654    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:40:53.201670    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:40:53.201677    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:40:53.213648    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:53.213659    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:53.250099    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:40:53.250113    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:40:53.263477    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:40:53.263490    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:40:53.278636    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:40:53.278648    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:40:53.290590    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:53.290601    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:53.295024    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:40:53.295032    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:40:53.310177    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:40:53.310186    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:40:53.323041    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:53.323054    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:53.348111    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:40:53.348120    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:40:53.359714    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:40:53.359725    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:53.371400    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:53.371410    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:53.405169    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:40:53.405178    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:40:53.419287    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:40:53.419298    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:40:53.434552    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:40:53.434562    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:40:51.258931    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:51.259140    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:51.282968    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:51.283089    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:51.298979    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:51.299059    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:51.311570    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:51.311648    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:51.322394    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:51.322461    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:51.332427    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:51.332499    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:51.344828    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:51.344902    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:51.355164    4935 logs.go:276] 0 containers: []
	W0728 18:40:51.355173    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:51.355228    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:51.365203    4935 logs.go:276] 0 containers: []
	W0728 18:40:51.365215    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:51.365223    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:51.365229    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:51.376216    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:51.376231    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:51.400420    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:51.400431    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:51.404831    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:51.404837    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:51.430333    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:51.430355    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:51.448519    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:51.448533    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:51.486410    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:51.486426    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:51.499196    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:51.499208    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:51.513299    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:51.513311    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:51.527062    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:51.527072    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:51.551162    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:51.551174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:51.571986    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:51.572000    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:51.585949    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:51.585960    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:51.622912    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:51.622923    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:51.637202    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:51.637213    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:55.960421    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:54.155101    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:00.961730    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:00.962049    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:00.982690    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:00.982776    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:00.997471    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:00.997543    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:01.009219    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:01.009279    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:01.019974    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:01.020043    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:01.029970    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:01.030037    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:01.040549    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:01.040611    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:01.051264    4787 logs.go:276] 0 containers: []
	W0728 18:41:01.051278    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:01.051329    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:01.064970    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:01.064988    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:01.064993    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:01.083182    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:01.083192    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:01.100749    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:01.100759    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:01.112619    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:01.112629    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:01.128364    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:01.128375    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:01.141859    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:01.141869    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:01.156943    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:01.156954    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:01.168998    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:01.169009    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:01.183870    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:01.183880    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:01.208545    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:01.208553    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:01.243300    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:01.243309    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:01.279797    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:01.279813    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:01.291382    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:01.291399    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:01.295912    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:01.295920    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:01.307704    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:01.307714    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
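The block above is one complete log sweep: for each control-plane component minikube lists matching containers with a docker ps name filter, tails each container's last 400 lines, then collects the kubelet and Docker journals, dmesg, node description, and container status. A minimal shell sketch of the same sweep, run inside the guest (component names, paths, and the 400-line tail are copied from the log itself):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_$c --format='{{.ID}}'); do
        docker logs --tail 400 "$id"    # per-container tail, as in logs.go:123
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # container status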
	I0728 18:41:03.823277    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:59.157314    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
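Each "Checking apiserver healthz" / "stopped" pair is one probe of the apiserver's /healthz endpoint; the roughly five-second gap between a check and its "stopped" line (for example 18:41:02.018 to 18:41:07.021 below) matches the Client.Timeout in the error. A hand-run equivalent, assuming the same guest IP; the -k flag is our addition, since the endpoint serves the cluster's own CA rather than a publicly trusted one:

    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo 'apiserver not responding'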
	I0728 18:40:59.157421    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:59.168273    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:59.168352    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:59.179244    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:59.179315    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:59.191455    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:59.191529    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:59.202575    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:59.202652    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:59.213323    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:59.213394    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:59.223714    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:59.223782    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:59.234074    4935 logs.go:276] 0 containers: []
	W0728 18:40:59.234086    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:59.234137    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:59.244663    4935 logs.go:276] 0 containers: []
	W0728 18:40:59.244672    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:59.244681    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:59.244686    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:59.280072    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:59.280083    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:59.292815    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:59.292825    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:59.316188    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:59.316195    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:59.330099    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:59.330109    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:59.342123    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:59.342134    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:59.354226    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:59.354236    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:59.370048    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:59.370059    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:59.388540    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:59.388550    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:59.405817    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:59.405827    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:59.420295    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:59.420305    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:59.435398    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:59.435408    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:59.449498    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:59.449508    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:59.488352    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:59.488360    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:59.492271    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:59.492278    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:02.018806    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:08.825505    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:08.825659    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:08.839720    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:08.839805    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:08.851892    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:08.851974    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:07.021160    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:07.021350    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:07.041182    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:07.041275    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:07.055317    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:07.055394    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:07.067572    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:07.067643    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:07.077974    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:07.078038    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:07.088307    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:07.088365    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:07.101467    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:07.101551    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:07.111424    4935 logs.go:276] 0 containers: []
	W0728 18:41:07.111442    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:07.111519    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:07.126281    4935 logs.go:276] 0 containers: []
	W0728 18:41:07.126296    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:07.126304    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:07.126310    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:07.137819    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:07.137829    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:07.160661    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:07.160674    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:07.173313    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:07.173324    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:07.185416    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:07.185426    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:07.189965    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:07.189971    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:07.224345    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:07.224354    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:07.238503    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:07.238513    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:07.250622    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:07.250635    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:07.290621    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:07.290633    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:07.305245    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:07.305258    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:07.322769    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:07.322779    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:07.352233    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:07.352243    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:07.367018    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:07.367031    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:07.381662    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:07.381671    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:08.862897    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:08.862970    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:08.873707    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:08.873772    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:08.884805    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:08.884880    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:08.895171    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:08.895240    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:08.905540    4787 logs.go:276] 0 containers: []
	W0728 18:41:08.905549    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:08.905603    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:08.916538    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:08.916554    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:08.916559    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:08.921136    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:08.921146    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:08.932432    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:08.932442    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:08.943897    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:08.943910    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:08.954932    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:08.954944    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:09.005572    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:09.005583    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:09.031513    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:09.031524    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:09.042841    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:09.042852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:09.058137    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:09.058149    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:09.069934    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:09.069948    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:09.104251    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:09.104261    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:09.119699    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:09.119712    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:09.139660    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:09.139669    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:09.151533    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:09.151545    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:09.164247    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:09.164256    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:11.684266    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:09.899585    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:16.686455    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:16.686600    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:16.708290    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:16.708364    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:16.723044    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:16.723109    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:16.733807    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:16.733882    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:16.747107    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:16.747176    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:16.757436    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:16.757498    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:16.767633    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:16.767705    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:16.778105    4787 logs.go:276] 0 containers: []
	W0728 18:41:16.778118    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:16.778178    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:16.789199    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:16.789215    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:16.789220    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:16.800787    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:16.800797    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:16.812570    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:16.812584    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:16.826710    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:16.826723    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:16.838331    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:16.838341    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:16.853170    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:16.853183    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:16.864284    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:16.864295    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:16.881494    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:16.881507    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:16.907370    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:16.907382    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:16.918868    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:16.918878    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:16.954758    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:16.954768    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:16.959343    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:16.959349    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:16.994527    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:16.994537    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:17.008838    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:17.008847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:17.020570    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:17.020584    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:14.902000    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:14.902253    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:14.928770    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:14.928877    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:14.946961    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:14.947054    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:14.961299    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:14.961377    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:14.972856    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:14.972926    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:14.983253    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:14.983314    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:14.993571    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:14.993630    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:15.004408    4935 logs.go:276] 0 containers: []
	W0728 18:41:15.004420    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:15.004479    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:15.014579    4935 logs.go:276] 0 containers: []
	W0728 18:41:15.014590    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:15.014598    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:15.014602    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:15.026359    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:15.026372    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:15.050669    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:15.050678    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:15.075274    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:15.075286    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:15.089482    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:15.089493    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:15.101284    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:15.101299    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:15.116228    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:15.116239    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:15.127464    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:15.127476    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:15.141371    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:15.141382    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:15.156079    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:15.156092    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:15.190616    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:15.190627    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:15.204890    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:15.204900    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:15.222186    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:15.222196    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:15.242381    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:15.242393    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:15.280896    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:15.280905    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:17.785971    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:19.537874    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:22.786507    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:22.786728    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:22.809779    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:22.809906    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:22.825097    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:22.825172    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:22.837951    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:22.838014    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:22.848517    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:22.848595    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:22.858647    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:22.858715    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:22.869540    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:22.869607    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:22.879736    4935 logs.go:276] 0 containers: []
	W0728 18:41:22.879746    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:22.879798    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:22.889919    4935 logs.go:276] 0 containers: []
	W0728 18:41:22.889931    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:22.889939    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:22.889947    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:22.902164    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:22.902174    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:22.906785    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:22.906791    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:22.918471    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:22.918482    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:22.933409    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:22.933419    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:22.947348    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:22.947359    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:22.986114    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:22.986121    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:23.000110    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:23.000121    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:23.011120    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:23.011132    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:23.035813    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:23.035824    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:23.049481    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:23.049492    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:23.074331    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:23.074342    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:23.085998    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:23.086010    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:23.103396    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:23.103407    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:23.137782    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:23.137794    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:24.540217    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:24.540402    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:24.559587    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:24.559682    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:24.573869    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:24.573943    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:24.585939    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:24.586005    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:24.597013    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:24.597069    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:24.611674    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:24.611731    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:24.622030    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:24.622089    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:24.631807    4787 logs.go:276] 0 containers: []
	W0728 18:41:24.631818    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:24.631868    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:24.642640    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:24.642659    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:24.642665    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:24.663308    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:24.663320    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:24.675093    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:24.675104    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:24.701103    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:24.701121    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:24.714334    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:24.714352    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:24.729497    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:24.729513    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:24.753417    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:24.753431    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:24.765337    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:24.765348    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:24.776834    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:24.776847    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:24.788378    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:24.788389    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:24.806845    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:24.806855    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:24.818287    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:24.818301    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:24.833216    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:24.833226    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:24.867580    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:24.867595    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:24.871805    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:24.871816    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:27.407771    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:25.652649    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:30.653197    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:30.653267    4935 kubeadm.go:597] duration metric: took 4m3.05908775s to restartPrimaryControlPlane
	W0728 18:41:30.653321    4935 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0728 18:41:30.653342    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
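After just over four minutes of failed probes (the 4m3.05908775s metric above), minikube gives up on restarting the existing control plane and resets the node before re-initializing from scratch. The reset invocation, unchanged from the log but wrapped for readability:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force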
	I0728 18:41:31.617561    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:41:31.622782    4935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:41:31.626019    4935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:41:31.628938    4935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:41:31.628945    4935 kubeadm.go:157] found existing configuration files:
	
	I0728 18:41:31.628969    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0728 18:41:31.631384    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:41:31.631405    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:41:31.634125    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0728 18:41:31.637308    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:41:31.637332    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:41:31.639942    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0728 18:41:31.642534    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:41:31.642556    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:41:31.645905    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0728 18:41:31.648857    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:41:31.648881    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
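The four grep-then-rm pairs above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so that kubeadm init can regenerate it. Collapsed into a loop (endpoint and file names copied from the log; the status-2 exits above simply mean the files were already absent):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:50479' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done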
	I0728 18:41:31.651447    4935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:41:31.667056    4935 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0728 18:41:31.667086    4935 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:41:31.723109    4935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:41:31.723202    4935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:41:31.723268    4935 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:41:31.772672    4935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:41:31.780855    4935 out.go:204]   - Generating certificates and keys ...
	I0728 18:41:31.780886    4935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:41:31.780916    4935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:41:31.780949    4935 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:41:31.780974    4935 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0728 18:41:31.781002    4935 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:41:31.781025    4935 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0728 18:41:31.781051    4935 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0728 18:41:31.781076    4935 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:41:31.781106    4935 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:41:31.781136    4935 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:41:31.781151    4935 kubeadm.go:310] [certs] Using the existing "sa" key
	I0728 18:41:31.781173    4935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:41:31.812066    4935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:41:31.997348    4935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:41:32.052177    4935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:41:32.133598    4935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:41:32.162721    4935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:41:32.162769    4935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:41:32.162790    4935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:41:32.230057    4935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
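kubeadm writes one static Pod manifest per component into /etc/kubernetes/manifests and the kubelet launches whatever appears there; the wait-control-plane phase below polls until those Pods answer. A quick way to confirm the manifests landed (the file names are kubeadm's defaults, not printed in this log):

    sudo ls /etc/kubernetes/manifests
    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml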
	I0728 18:41:32.410064    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:32.410161    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:32.422052    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:32.422118    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:32.433448    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:32.433520    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:32.444121    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:32.444218    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:32.456882    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:32.456949    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:32.468594    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:32.468662    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:32.481567    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:32.481632    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:32.492011    4787 logs.go:276] 0 containers: []
	W0728 18:41:32.492020    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:32.492072    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:32.502438    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:32.502454    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:32.502459    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:32.507164    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:32.507170    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:32.521841    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:32.521850    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:32.555693    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:32.555708    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:32.567311    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:32.567321    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:32.581855    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:32.581869    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:32.595091    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:32.595106    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:32.630515    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:32.630530    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:32.645193    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:32.645204    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:32.657291    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:32.657304    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:32.676853    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:32.676866    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:32.688119    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:32.688132    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:32.712489    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:32.712496    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:32.728403    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:32.728413    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:32.740289    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:32.740304    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:32.234291    4935 out.go:204]   - Booting up control plane ...
	I0728 18:41:32.234340    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:41:32.234380    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:41:32.234436    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:41:32.234479    4935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:41:32.234648    4935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0728 18:41:36.233639    4935 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001079 seconds
	I0728 18:41:36.233716    4935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:41:36.239490    4935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:41:36.754069    4935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:41:36.754264    4935 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-278000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:41:37.257871    4935 kubeadm.go:310] [bootstrap-token] Using token: yanhle.k7yavktbovzn0uxp
	I0728 18:41:37.261038    4935 out.go:204]   - Configuring RBAC rules ...
	I0728 18:41:37.261105    4935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:41:37.261158    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:41:37.267999    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:41:37.268958    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:41:37.269724    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:41:37.270675    4935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:41:37.273756    4935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:41:37.429875    4935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:41:37.661849    4935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:41:37.662341    4935 kubeadm.go:310] 
	I0728 18:41:37.662370    4935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:41:37.662376    4935 kubeadm.go:310] 
	I0728 18:41:37.662420    4935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:41:37.662428    4935 kubeadm.go:310] 
	I0728 18:41:37.662443    4935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:41:37.662472    4935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:41:37.662501    4935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:41:37.662504    4935 kubeadm.go:310] 
	I0728 18:41:37.662541    4935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:41:37.662545    4935 kubeadm.go:310] 
	I0728 18:41:37.662586    4935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:41:37.662592    4935 kubeadm.go:310] 
	I0728 18:41:37.662641    4935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:41:37.662679    4935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:41:37.662719    4935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:41:37.662725    4935 kubeadm.go:310] 
	I0728 18:41:37.662774    4935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:41:37.662822    4935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:41:37.662826    4935 kubeadm.go:310] 
	I0728 18:41:37.662877    4935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yanhle.k7yavktbovzn0uxp \
	I0728 18:41:37.662939    4935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b \
	I0728 18:41:37.662951    4935 kubeadm.go:310] 	--control-plane 
	I0728 18:41:37.662957    4935 kubeadm.go:310] 
	I0728 18:41:37.662995    4935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:41:37.662999    4935 kubeadm.go:310] 
	I0728 18:41:37.663036    4935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yanhle.k7yavktbovzn0uxp \
	I0728 18:41:37.663103    4935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b 
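The --discovery-token-ca-cert-hash pins the cluster CA for joining nodes. It can be recomputed from the CA certificate with the standard pipeline from the Kubernetes docs; note the path below is kubeadm's stock location, while this cluster keeps its PKI under /var/lib/minikube/certs (see the certs phase above):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'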
	I0728 18:41:37.663174    4935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0728 18:41:37.663224    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:41:37.663232    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:41:37.665875    4935 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0728 18:41:37.672871    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0728 18:41:37.675688    4935 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
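The 496-byte conflist itself is not reproduced in the log. A representative bridge conflist of the kind this step installs, with every field value illustrative rather than taken from the actual file:

	cat <<-'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF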
	I0728 18:41:37.680260    4935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:41:37.680298    4935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:41:37.680323    4935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-278000 minikube.k8s.io/updated_at=2024_07_28T18_41_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=stopped-upgrade-278000 minikube.k8s.io/primary=true
	I0728 18:41:37.721350    4935 ops.go:34] apiserver oom_adj: -16
	I0728 18:41:37.721342    4935 kubeadm.go:1113] duration metric: took 41.074541ms to wait for elevateKubeSystemPrivileges
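"elevateKubeSystemPrivileges" is the clusterrolebinding created just above: it grants the kube-system default service account cluster-admin so the bundled addons can operate, and the oom_adj read confirms the apiserver received its usual -16 OOM-score adjustment. The same binding via a stock kubectl, dropping the bundled-binary path:

    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default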
	I0728 18:41:37.721449    4935 kubeadm.go:394] duration metric: took 4m10.140951708s to StartCluster
	I0728 18:41:37.721461    4935 settings.go:142] acquiring lock: {Name:mk87b264018a6cee2b66b065d01a79c5a5adf3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:41:37.721557    4935 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:41:37.721961    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:41:37.722445    4935 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:41:37.722546    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:41:37.722530    4935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:41:37.722564    4935 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-278000"
	I0728 18:41:37.722577    4935 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-278000"
	W0728 18:41:37.722580    4935 addons.go:243] addon storage-provisioner should already be in state true
	I0728 18:41:37.722591    4935 host.go:66] Checking if "stopped-upgrade-278000" exists ...
	I0728 18:41:37.722598    4935 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-278000"
	I0728 18:41:37.722611    4935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-278000"
	I0728 18:41:37.722862    4935 retry.go:31] will retry after 950.628276ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/monitor: connect: connection refused
	I0728 18:41:37.723670    4935 kapi.go:59] client config for stopped-upgrade-278000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023945c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:41:37.723799    4935 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-278000"
	W0728 18:41:37.723803    4935 addons.go:243] addon default-storageclass should already be in state true
	I0728 18:41:37.723811    4935 host.go:66] Checking if "stopped-upgrade-278000" exists ...
	I0728 18:41:37.724347    4935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:41:37.724351    4935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:41:37.724357    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:41:37.726801    4935 out.go:177] * Verifying Kubernetes components...
	I0728 18:41:37.734826    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:41:37.813930    4935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:41:37.819104    4935 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:41:37.819150    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:41:37.820877    4935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:41:37.824760    4935 api_server.go:72] duration metric: took 102.300584ms to wait for apiserver process to appear ...
	I0728 18:41:37.824773    4935 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:41:37.824783    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
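
From here on, api_server.go repeatedly probes https://10.0.2.15:8443/healthz at roughly 5-second intervals until the endpoint answers "ok" or the 6m0s node wait expires. A minimal sketch of that loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify here; minikube's real client trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps between checks in the log
            Transport: &http.Transport{
                // Sketch only: the real check verifies against the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("apiserver healthz never reported healthy")
    }
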
	I0728 18:41:38.680297    4935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:41:35.258451    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:38.684349    4935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:41:38.684358    4935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:41:38.684370    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:41:38.715457    4935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:41:40.260611    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:40.260819    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:40.272760    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:40.272836    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:40.283360    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:40.283433    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:40.293715    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:40.293785    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:40.304279    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:40.304344    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:40.314927    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:40.314998    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:40.325702    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:40.325768    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:40.336803    4787 logs.go:276] 0 containers: []
	W0728 18:41:40.336819    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:40.336872    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:40.347390    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:40.347406    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:40.347411    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:40.351923    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:40.351932    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:40.392308    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:40.392323    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:40.406969    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:40.406979    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:40.418325    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:40.418335    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:40.435626    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:40.435640    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:40.459340    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:40.459353    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:40.470835    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:40.470852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:40.485315    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:40.485324    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:40.496903    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:40.496916    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:40.508853    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:40.508864    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:40.520628    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:40.520643    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:40.532813    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:40.532824    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:40.567523    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:40.567533    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:40.582471    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:40.582482    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
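
Each diagnostic pass above follows the same pattern: list the container ID for each control-plane component with a docker ps name filter, then tail 400 lines from each container found. A condensed sketch of that loop, shelling out the way ssh_runner does (this assumes a local docker CLI rather than minikube's SSH transport):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
            "k8s_kube-scheduler", "k8s_kube-proxy",
            "k8s_kube-controller-manager", "k8s_storage-provisioner",
        } {
            // Same filter the log shows: docker ps -a --filter=name=... --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
            }
        }
    }
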
	I0728 18:41:43.096621    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:42.826833    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:42.826886    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:48.098827    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:48.098923    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:48.110024    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:48.110090    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:48.121104    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:48.121173    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:48.132967    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:48.133041    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:48.146675    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:48.146744    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:48.158935    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:48.159004    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:48.170272    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:48.170340    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:48.184732    4787 logs.go:276] 0 containers: []
	W0728 18:41:48.184745    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:48.184804    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:48.195310    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:48.195330    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:48.195335    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:48.207392    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:48.207404    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:48.234245    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:48.234258    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:48.246479    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:48.246489    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:48.262303    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:48.262317    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:48.274838    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:48.274849    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:48.286331    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:48.286342    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:48.311267    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:48.311276    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:48.347500    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:48.347515    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:48.370642    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:48.370654    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:48.382990    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:48.383001    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:48.418444    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:48.418454    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:48.423362    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:48.423368    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:48.437481    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:48.437492    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:48.454650    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:48.454662    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:47.827265    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:47.827289    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:50.968936    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:52.827971    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:52.827998    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:55.971047    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:55.971248    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:55.985652    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:41:55.985730    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:55.997440    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:41:55.997515    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:56.008046    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:41:56.008120    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:56.018990    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:41:56.019063    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:56.030392    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:41:56.030460    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:56.041197    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:41:56.041261    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:56.051248    4787 logs.go:276] 0 containers: []
	W0728 18:41:56.051259    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:56.051316    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:56.064388    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:41:56.064404    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:41:56.064410    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:56.075913    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:56.075928    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:56.080394    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:41:56.080402    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:41:56.094840    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:41:56.094852    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:41:56.116608    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:41:56.116618    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:41:56.128074    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:41:56.128084    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:41:56.142877    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:41:56.142887    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:41:56.157540    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:56.157552    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:56.195382    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:41:56.195395    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:41:56.207470    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:41:56.207485    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:41:56.219597    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:41:56.219609    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:41:56.237432    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:41:56.237447    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:41:56.253512    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:41:56.253524    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:41:56.265255    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:56.265266    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:56.288796    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:56.288805    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:58.826291    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:57.828485    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:57.828528    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:03.826623    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:03.826736    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:42:03.837866    4787 logs.go:276] 1 containers: [45ee255a36d7]
	I0728 18:42:03.837937    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:42:03.848309    4787 logs.go:276] 1 containers: [c7a431a27ba3]
	I0728 18:42:03.848386    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:42:02.829208    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:02.829248    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:07.830154    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:07.830178    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0728 18:42:08.155442    4935 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0728 18:42:08.162672    4935 out.go:177] * Enabled addons: storage-provisioner
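
The 'default-storageclass' failure above comes from the addon's callback trying to list StorageClasses at /apis/storage.k8s.io/v1/storageclasses and hitting the same unreachable apiserver (dial tcp 10.0.2.15:8443: i/o timeout). Roughly what that call looks like, sketched with client-go; the kubeconfig path is an assumption based on the guest paths seen earlier in this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; the log shows /var/lib/minikube/kubeconfig on the guest.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The GET that timed out above: /apis/storage.k8s.io/v1/storageclasses
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("Error listing StorageClasses:", err) // e.g. dial tcp ... i/o timeout
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
        }
    }
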
	I0728 18:42:03.869063    4787 logs.go:276] 4 containers: [b1509271f90d 824cc518b6a9 92d3c820798b 769aaacac2ed]
	I0728 18:42:03.869137    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:42:03.879869    4787 logs.go:276] 1 containers: [8f1e7bd03878]
	I0728 18:42:03.879936    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:42:03.890573    4787 logs.go:276] 1 containers: [571bf51b1ca1]
	I0728 18:42:03.890644    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:42:03.901277    4787 logs.go:276] 1 containers: [bb655071eb1c]
	I0728 18:42:03.901347    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:42:03.911286    4787 logs.go:276] 0 containers: []
	W0728 18:42:03.911301    4787 logs.go:278] No container was found matching "kindnet"
	I0728 18:42:03.911363    4787 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:42:03.921769    4787 logs.go:276] 1 containers: [1fcc3a645680]
	I0728 18:42:03.921787    4787 logs.go:123] Gathering logs for kube-apiserver [45ee255a36d7] ...
	I0728 18:42:03.921792    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ee255a36d7"
	I0728 18:42:03.937351    4787 logs.go:123] Gathering logs for coredns [824cc518b6a9] ...
	I0728 18:42:03.937361    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824cc518b6a9"
	I0728 18:42:03.948995    4787 logs.go:123] Gathering logs for coredns [769aaacac2ed] ...
	I0728 18:42:03.949006    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 769aaacac2ed"
	I0728 18:42:03.960786    4787 logs.go:123] Gathering logs for kubelet ...
	I0728 18:42:03.960797    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:42:03.996123    4787 logs.go:123] Gathering logs for dmesg ...
	I0728 18:42:03.996131    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:42:04.000491    4787 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:42:04.000498    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:42:04.037657    4787 logs.go:123] Gathering logs for coredns [b1509271f90d] ...
	I0728 18:42:04.037668    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1509271f90d"
	I0728 18:42:04.050106    4787 logs.go:123] Gathering logs for kube-proxy [571bf51b1ca1] ...
	I0728 18:42:04.050120    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571bf51b1ca1"
	I0728 18:42:04.062325    4787 logs.go:123] Gathering logs for storage-provisioner [1fcc3a645680] ...
	I0728 18:42:04.062336    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcc3a645680"
	I0728 18:42:04.073466    4787 logs.go:123] Gathering logs for Docker ...
	I0728 18:42:04.073478    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:42:04.096810    4787 logs.go:123] Gathering logs for etcd [c7a431a27ba3] ...
	I0728 18:42:04.096820    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7a431a27ba3"
	I0728 18:42:04.111015    4787 logs.go:123] Gathering logs for kube-scheduler [8f1e7bd03878] ...
	I0728 18:42:04.111040    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e7bd03878"
	I0728 18:42:04.127131    4787 logs.go:123] Gathering logs for container status ...
	I0728 18:42:04.127143    4787 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:42:04.138946    4787 logs.go:123] Gathering logs for coredns [92d3c820798b] ...
	I0728 18:42:04.138958    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3c820798b"
	I0728 18:42:04.151550    4787 logs.go:123] Gathering logs for kube-controller-manager [bb655071eb1c] ...
	I0728 18:42:04.151561    4787 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb655071eb1c"
	I0728 18:42:06.672417    4787 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:08.168595    4935 addons.go:510] duration metric: took 30.446427542s for enable addons: enabled=[storage-provisioner]
	I0728 18:42:11.674158    4787 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:11.678296    4787 out.go:177] 
	W0728 18:42:11.682298    4787 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0728 18:42:11.682316    4787 out.go:239] * 
	W0728 18:42:11.683566    4787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:42:11.698091    4787 out.go:177] 
	I0728 18:42:12.831229    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:12.831270    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:17.832654    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:17.832679    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:22.834423    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:22.834445    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 01:33:21 UTC, ends at Mon 2024-07-29 01:42:27 UTC. --
	Jul 29 01:42:14 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:14Z" level=error msg="ContainerStats resp: {0x40004f3700 linux}"
	Jul 29 01:42:14 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:14Z" level=error msg="ContainerStats resp: {0x4000942180 linux}"
	Jul 29 01:42:14 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:14Z" level=error msg="ContainerStats resp: {0x4000a0a2c0 linux}"
	Jul 29 01:42:14 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:14Z" level=error msg="ContainerStats resp: {0x4000942a80 linux}"
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.173225010Z" level=info msg="shim disconnected" id=824cc518b6a9d6080797b83b6597ce2a522f195fd211b9e1bd42bb71bb5d6d17
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3213]: time="2024-07-29T01:42:14.173366006Z" level=info msg="ignoring event" container=824cc518b6a9d6080797b83b6597ce2a522f195fd211b9e1bd42bb71bb5d6d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.173599874Z" level=warning msg="cleaning up after shim disconnected" id=824cc518b6a9d6080797b83b6597ce2a522f195fd211b9e1bd42bb71bb5d6d17 namespace=moby
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.173610958Z" level=info msg="cleaning up dead shim"
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.177017491Z" level=warning msg="cleanup warnings time=\"2024-07-29T01:42:14Z\" level=info msg=\"starting signal loop\" namespace=moby pid=18885 runtime=io.containerd.runc.v2\n"
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.207309006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.207336172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.207342130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 01:42:14 running-upgrade-638000 dockerd[3219]: time="2024-07-29T01:42:14.207427669Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/378e0862dbc270c0763fa958f57b71fadc0cc5eb531d02c607e906a37a364910 pid=18908 runtime=io.containerd.runc.v2
	Jul 29 01:42:18 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:18Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 01:42:24 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 01:42:24 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:24Z" level=error msg="ContainerStats resp: {0x400050a840 linux}"
	Jul 29 01:42:24 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:24Z" level=error msg="ContainerStats resp: {0x400060bd00 linux}"
	Jul 29 01:42:25 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:25Z" level=error msg="ContainerStats resp: {0x400088a840 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x40009ca600 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x40009caa00 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x40009cad80 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x400088b440 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x400088b900 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x400088bd40 linux}"
	Jul 29 01:42:26 running-upgrade-638000 cri-dockerd[3061]: time="2024-07-29T01:42:26Z" level=error msg="ContainerStats resp: {0x40004f2280 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	378e0862dbc27       edaa71f2aee88       13 seconds ago      Running             coredns                   2                   88abaab968eb8
	11b4ec44ef8f4       edaa71f2aee88       13 seconds ago      Running             coredns                   2                   d97a9a9a7277a
	b1509271f90da       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d97a9a9a7277a
	824cc518b6a9d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   88abaab968eb8
	1fcc3a6456809       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   1ca75508c924a
	571bf51b1ca13       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   01451ac4038eb
	c7a431a27ba33       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   15939c35549aa
	8f1e7bd03878b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   7b4ebfd2a2f4f
	bb655071eb1c0       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ec3ca846983c5
	45ee255a36d77       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   7fbe690781a0f
	
	
	==> coredns [11b4ec44ef8f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5245224222967645544.68961502732033151. HINFO: read udp 10.244.0.3:51824->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5245224222967645544.68961502732033151. HINFO: read udp 10.244.0.3:37295->10.0.2.3:53: i/o timeout
	
	
	==> coredns [378e0862dbc2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 719574543800355477.8317655666217400138. HINFO: read udp 10.244.0.2:50330->10.0.2.3:53: i/o timeout
	
	
	==> coredns [824cc518b6a9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:51816->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:48261->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:45086->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:41374->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:52557->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:58201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:58558->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:39783->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:54048->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5733169817962659976.3658868027426337273. HINFO: read udp 10.244.0.2:45898->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b1509271f90d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:50339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:38596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:41744->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:37135->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:57354->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:51488->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:50567->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:51681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:42290->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1225771382164232036.9081781896443041330. HINFO: read udp 10.244.0.3:42703->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
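
Every coredns instance above reports the same symptom: HINFO probes to the upstream resolver at 10.0.2.3:53 (QEMU's user-mode-networking DNS) time out. A quick way to reproduce that upstream check from Go, assuming the same upstream address (coredns actually sends an HINFO query; a plain lookup exercises the same path):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                // Force the same upstream coredns is timing out against.
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.io")
        if err != nil {
            fmt.Println("upstream DNS probe failed:", err) // matches coredns's i/o timeout
            return
        }
        fmt.Println(addrs)
    }
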
	
	
	==> describe nodes <==
	Name:               running-upgrade-638000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-638000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=running-upgrade-638000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_28T18_38_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:38:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-638000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:42:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:38:10 +0000   Mon, 29 Jul 2024 01:38:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:38:10 +0000   Mon, 29 Jul 2024 01:38:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:38:10 +0000   Mon, 29 Jul 2024 01:38:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:38:10 +0000   Mon, 29 Jul 2024 01:38:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-638000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 ceb440cb48da4690b0ca1d06134e694d
	  System UUID:                ceb440cb48da4690b0ca1d06134e694d
	  Boot ID:                    478faae5-a4c6-4590-85e6-6babc2326784
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2jcn2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-wmrbz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-638000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-638000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-running-upgrade-638000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-vhcmz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-638000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-638000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-638000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-638000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-638000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-638000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-638000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-638000 status is now: NodeReady
	  Normal  RegisteredNode           4m3s                   node-controller  Node running-upgrade-638000 event: Registered Node running-upgrade-638000 in Controller
	
	
	==> dmesg <==
	[  +1.319735] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.077268] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +0.080137] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[  +1.228478] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +0.080673] systemd-fstab-generator[1066]: Ignoring "noauto" for root device
	[  +2.718325] systemd-fstab-generator[1297]: Ignoring "noauto" for root device
	[  +0.228006] kauditd_printk_skb: 92 callbacks suppressed
	[  +8.931086] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.569075] systemd-fstab-generator[2212]: Ignoring "noauto" for root device
	[  +0.149075] systemd-fstab-generator[2248]: Ignoring "noauto" for root device
	[  +0.099043] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.095933] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +2.623734] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.203509] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.079348] systemd-fstab-generator[3029]: Ignoring "noauto" for root device
	[  +0.077836] systemd-fstab-generator[3040]: Ignoring "noauto" for root device
	[  +0.092709] systemd-fstab-generator[3054]: Ignoring "noauto" for root device
	[  +2.307224] systemd-fstab-generator[3206]: Ignoring "noauto" for root device
	[  +2.244267] systemd-fstab-generator[3566]: Ignoring "noauto" for root device
	[  +0.976729] systemd-fstab-generator[3713]: Ignoring "noauto" for root device
	[Jul29 01:34] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 01:38] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.451475] systemd-fstab-generator[11938]: Ignoring "noauto" for root device
	[  +5.639477] systemd-fstab-generator[12542]: Ignoring "noauto" for root device
	[  +0.469709] systemd-fstab-generator[12672]: Ignoring "noauto" for root device
	
	
	==> etcd [c7a431a27ba3] <==
	{"level":"info","ts":"2024-07-29T01:38:06.228Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:38:06.228Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T01:38:06.230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T01:38:06.230Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T01:38:06.230Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T01:38:06.231Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T01:38:06.231Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-638000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:06.999Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:07.000Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T01:38:07.000Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:07.001Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:38:07.001Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:07.001Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:07.003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:07.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:07.003Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:42:28 up 9 min,  0 users,  load average: 0.15, 0.24, 0.17
	Linux running-upgrade-638000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [45ee255a36d7] <==
	I0729 01:38:08.205701       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 01:38:08.231870       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 01:38:08.231898       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:38:08.233468       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:38:08.234061       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 01:38:08.234276       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:38:08.253761       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 01:38:08.970989       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 01:38:09.142478       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 01:38:09.148193       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 01:38:09.148223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:38:09.301210       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:38:09.313202       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:38:09.398902       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 01:38:09.402538       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 01:38:09.402995       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 01:38:09.404285       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:38:10.288047       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 01:38:10.626472       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 01:38:10.633995       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 01:38:10.639739       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 01:38:10.683236       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:38:24.107103       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 01:38:24.342326       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 01:38:24.819824       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [bb655071eb1c] <==
	I0729 01:38:24.099245       1 shared_informer.go:262] Caches are synced for node
	I0729 01:38:24.099308       1 range_allocator.go:173] Starting range CIDR allocator
	I0729 01:38:24.099326       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0729 01:38:24.099345       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0729 01:38:24.100858       1 shared_informer.go:262] Caches are synced for deployment
	I0729 01:38:24.102002       1 shared_informer.go:262] Caches are synced for PV protection
	I0729 01:38:24.103079       1 range_allocator.go:374] Set node running-upgrade-638000 PodCIDR to [10.244.0.0/24]
	I0729 01:38:24.108407       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 01:38:24.113025       1 shared_informer.go:262] Caches are synced for namespace
	I0729 01:38:24.116253       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wmrbz"
	I0729 01:38:24.119798       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2jcn2"
	I0729 01:38:24.138790       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 01:38:24.293387       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 01:38:24.314860       1 shared_informer.go:262] Caches are synced for taint
	I0729 01:38:24.314953       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 01:38:24.315011       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-638000. Assuming now as a timestamp.
	I0729 01:38:24.315073       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 01:38:24.315137       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 01:38:24.315272       1 event.go:294] "Event occurred" object="running-upgrade-638000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-638000 event: Registered Node running-upgrade-638000 in Controller"
	I0729 01:38:24.320458       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 01:38:24.337237       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 01:38:24.345051       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vhcmz"
	I0729 01:38:24.732443       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 01:38:24.790029       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 01:38:24.790042       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [571bf51b1ca1] <==
	I0729 01:38:24.810120       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 01:38:24.810143       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 01:38:24.810152       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 01:38:24.818057       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 01:38:24.818068       1 server_others.go:206] "Using iptables Proxier"
	I0729 01:38:24.818081       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 01:38:24.818166       1 server.go:661] "Version info" version="v1.24.1"
	I0729 01:38:24.818170       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:38:24.818395       1 config.go:317] "Starting service config controller"
	I0729 01:38:24.818404       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 01:38:24.818416       1 config.go:226] "Starting endpoint slice config controller"
	I0729 01:38:24.818418       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 01:38:24.818644       1 config.go:444] "Starting node config controller"
	I0729 01:38:24.818646       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 01:38:24.918537       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 01:38:24.918541       1 shared_informer.go:262] Caches are synced for service config
	I0729 01:38:24.918855       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [8f1e7bd03878] <==
	W0729 01:38:08.183783       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:38:08.183795       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:38:08.183822       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 01:38:08.183852       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 01:38:08.183881       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:38:08.183916       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:38:08.183946       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:38:08.183972       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:38:08.184015       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:38:08.184036       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:38:08.184078       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:38:08.184117       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:38:08.184173       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:38:08.184208       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:38:09.006944       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:38:09.006988       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:38:09.041246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:38:09.041306       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:38:09.097110       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:38:09.097167       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 01:38:09.124461       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:38:09.124553       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:38:09.219916       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:38:09.220060       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 01:38:11.376414       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 01:33:21 UTC, ends at Mon 2024-07-29 01:42:28 UTC. --
	Jul 29 01:38:11 running-upgrade-638000 kubelet[12548]: E0729 01:38:11.259339   12548 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-638000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-638000"
	Jul 29 01:38:11 running-upgrade-638000 kubelet[12548]: I0729 01:38:11.658562   12548 apiserver.go:52] "Watching apiserver"
	Jul 29 01:38:12 running-upgrade-638000 kubelet[12548]: I0729 01:38:12.087701   12548 reconciler.go:157] "Reconciler: start to sync state"
	Jul 29 01:38:12 running-upgrade-638000 kubelet[12548]: E0729 01:38:12.258366   12548 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-638000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-638000"
	Jul 29 01:38:12 running-upgrade-638000 kubelet[12548]: E0729 01:38:12.459314   12548 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-638000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-638000"
	Jul 29 01:38:12 running-upgrade-638000 kubelet[12548]: E0729 01:38:12.659119   12548 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-638000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-638000"
	Jul 29 01:38:12 running-upgrade-638000 kubelet[12548]: I0729 01:38:12.860646   12548 request.go:601] Waited for 1.00226596s due to client-side throttling, not priority and fairness, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-running-upgrade-638000
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.201734   12548 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.202101   12548 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.324197   12548 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.347752   12548 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402354   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4b3255cf-5b3d-4964-84f8-7708ab604603-tmp\") pod \"storage-provisioner\" (UID: \"4b3255cf-5b3d-4964-84f8-7708ab604603\") " pod="kube-system/storage-provisioner"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402473   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/520d0517-b312-4fb9-8980-9da024b023e6-xtables-lock\") pod \"kube-proxy-vhcmz\" (UID: \"520d0517-b312-4fb9-8980-9da024b023e6\") " pod="kube-system/kube-proxy-vhcmz"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402508   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qrx\" (UniqueName: \"kubernetes.io/projected/520d0517-b312-4fb9-8980-9da024b023e6-kube-api-access-49qrx\") pod \"kube-proxy-vhcmz\" (UID: \"520d0517-b312-4fb9-8980-9da024b023e6\") " pod="kube-system/kube-proxy-vhcmz"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402521   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/520d0517-b312-4fb9-8980-9da024b023e6-lib-modules\") pod \"kube-proxy-vhcmz\" (UID: \"520d0517-b312-4fb9-8980-9da024b023e6\") " pod="kube-system/kube-proxy-vhcmz"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402542   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhh6c\" (UniqueName: \"kubernetes.io/projected/4b3255cf-5b3d-4964-84f8-7708ab604603-kube-api-access-bhh6c\") pod \"storage-provisioner\" (UID: \"4b3255cf-5b3d-4964-84f8-7708ab604603\") " pod="kube-system/storage-provisioner"
	Jul 29 01:38:24 running-upgrade-638000 kubelet[12548]: I0729 01:38:24.402568   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/520d0517-b312-4fb9-8980-9da024b023e6-kube-proxy\") pod \"kube-proxy-vhcmz\" (UID: \"520d0517-b312-4fb9-8980-9da024b023e6\") " pod="kube-system/kube-proxy-vhcmz"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.097998   12548 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.100399   12548 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.119148   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c078715-a273-43b1-b6b3-8c2f060457a8-config-volume\") pod \"coredns-6d4b75cb6d-2jcn2\" (UID: \"4c078715-a273-43b1-b6b3-8c2f060457a8\") " pod="kube-system/coredns-6d4b75cb6d-2jcn2"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.119199   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drql2\" (UniqueName: \"kubernetes.io/projected/4c078715-a273-43b1-b6b3-8c2f060457a8-kube-api-access-drql2\") pod \"coredns-6d4b75cb6d-2jcn2\" (UID: \"4c078715-a273-43b1-b6b3-8c2f060457a8\") " pod="kube-system/coredns-6d4b75cb6d-2jcn2"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.219520   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6xw\" (UniqueName: \"kubernetes.io/projected/a65367c7-aea2-4de0-9083-59322755368c-kube-api-access-ps6xw\") pod \"coredns-6d4b75cb6d-wmrbz\" (UID: \"a65367c7-aea2-4de0-9083-59322755368c\") " pod="kube-system/coredns-6d4b75cb6d-wmrbz"
	Jul 29 01:38:26 running-upgrade-638000 kubelet[12548]: I0729 01:38:26.219544   12548 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a65367c7-aea2-4de0-9083-59322755368c-config-volume\") pod \"coredns-6d4b75cb6d-wmrbz\" (UID: \"a65367c7-aea2-4de0-9083-59322755368c\") " pod="kube-system/coredns-6d4b75cb6d-wmrbz"
	Jul 29 01:42:14 running-upgrade-638000 kubelet[12548]: I0729 01:42:14.968817   12548 scope.go:110] "RemoveContainer" containerID="769aaacac2ed29ced82f71ce8a8aaddffb7dfeff52a65608b7fdffdfa195e27e"
	Jul 29 01:42:14 running-upgrade-638000 kubelet[12548]: I0729 01:42:14.984589   12548 scope.go:110] "RemoveContainer" containerID="92d3c820798b62bbda72a9d51c83fa8f1b3c8e6f20d52296761b6d666e32322f"
	
	
	==> storage-provisioner [1fcc3a645680] <==
	I0729 01:38:25.120276       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 01:38:25.123926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 01:38:25.123991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 01:38:25.127424       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 01:38:25.127544       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-638000_3f3b9861-2fba-4213-9b5d-8fb1ea36ad01!
	I0729 01:38:25.128045       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b03b3fa-db30-42ee-a07a-82f01ed4361d", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-638000_3f3b9861-2fba-4213-9b5d-8fb1ea36ad01 became leader
	I0729 01:38:25.227921       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-638000_3f3b9861-2fba-4213-9b5d-8fb1ea36ad01!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-638000 -n running-upgrade-638000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-638000 -n running-upgrade-638000: exit status 2 (15.549903125s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-638000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-638000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-638000
--- FAIL: TestRunningBinaryUpgrade (589.84s)

TestKubernetesUpgrade (18.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.880117041s)

-- stdout --
	* [kubernetes-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-980000" primary control-plane node in "kubernetes-upgrade-980000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:35:56.970671    4864 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:35:56.970781    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:35:56.970784    4864 out.go:304] Setting ErrFile to fd 2...
	I0728 18:35:56.970786    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:35:56.970912    4864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:35:56.972058    4864 out.go:298] Setting JSON to false
	I0728 18:35:56.988626    4864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3927,"bootTime":1722213029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:35:56.988713    4864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:35:56.994094    4864 out.go:177] * [kubernetes-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:35:56.999616    4864 notify.go:220] Checking for updates...
	I0728 18:35:57.004112    4864 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:35:57.010925    4864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:35:57.015086    4864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:35:57.019117    4864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:35:57.022098    4864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:35:57.026050    4864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:35:57.029450    4864 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:35:57.029532    4864 config.go:182] Loaded profile config "running-upgrade-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:35:57.029575    4864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:35:57.034062    4864 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:35:57.041085    4864 start.go:297] selected driver: qemu2
	I0728 18:35:57.041090    4864 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:35:57.041095    4864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:35:57.043393    4864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:35:57.046065    4864 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:35:57.049183    4864 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 18:35:57.049213    4864 cni.go:84] Creating CNI manager for ""
	I0728 18:35:57.049223    4864 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 18:35:57.049243    4864 start.go:340] cluster config:
	{Name:kubernetes-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:35:57.052775    4864 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:35:57.060052    4864 out.go:177] * Starting "kubernetes-upgrade-980000" primary control-plane node in "kubernetes-upgrade-980000" cluster
	I0728 18:35:57.064068    4864 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 18:35:57.064101    4864 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 18:35:57.064109    4864 cache.go:56] Caching tarball of preloaded images
	I0728 18:35:57.064172    4864 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:35:57.064178    4864 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 18:35:57.064237    4864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kubernetes-upgrade-980000/config.json ...
	I0728 18:35:57.064248    4864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kubernetes-upgrade-980000/config.json: {Name:mk030e831ba3b26f2c50c745d72f5c6c4cc8bb1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:35:57.064510    4864 start.go:360] acquireMachinesLock for kubernetes-upgrade-980000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:35:57.064550    4864 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "kubernetes-upgrade-980000"
	I0728 18:35:57.064563    4864 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:35:57.064595    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:35:57.073063    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:35:57.091251    4864 start.go:159] libmachine.API.Create for "kubernetes-upgrade-980000" (driver="qemu2")
	I0728 18:35:57.091281    4864 client.go:168] LocalClient.Create starting
	I0728 18:35:57.091352    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:35:57.091385    4864 main.go:141] libmachine: Decoding PEM data...
	I0728 18:35:57.091393    4864 main.go:141] libmachine: Parsing certificate...
	I0728 18:35:57.091437    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:35:57.091463    4864 main.go:141] libmachine: Decoding PEM data...
	I0728 18:35:57.091472    4864 main.go:141] libmachine: Parsing certificate...
	I0728 18:35:57.091893    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:35:57.405775    4864 main.go:141] libmachine: Creating SSH key...
	I0728 18:35:57.483390    4864 main.go:141] libmachine: Creating Disk image...
	I0728 18:35:57.483398    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:35:57.483611    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:35:57.493014    4864 main.go:141] libmachine: STDOUT: 
	I0728 18:35:57.493034    4864 main.go:141] libmachine: STDERR: 
	I0728 18:35:57.493085    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2 +20000M
	I0728 18:35:57.500972    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:35:57.500986    4864 main.go:141] libmachine: STDERR: 
	I0728 18:35:57.501009    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:35:57.501013    4864 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:35:57.501025    4864 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:35:57.501053    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:8f:f8:ba:22:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:35:57.502629    4864 main.go:141] libmachine: STDOUT: 
	I0728 18:35:57.502643    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:35:57.502660    4864 client.go:171] duration metric: took 411.37575ms to LocalClient.Create
	I0728 18:35:59.504841    4864 start.go:128] duration metric: took 2.440226333s to createHost
	I0728 18:35:59.504926    4864 start.go:83] releasing machines lock for "kubernetes-upgrade-980000", held for 2.440367417s
	W0728 18:35:59.504985    4864 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:35:59.519127    4864 out.go:177] * Deleting "kubernetes-upgrade-980000" in qemu2 ...
	W0728 18:35:59.546669    4864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:35:59.546703    4864 start.go:729] Will try again in 5 seconds ...
	I0728 18:36:04.548811    4864 start.go:360] acquireMachinesLock for kubernetes-upgrade-980000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:36:04.548920    4864 start.go:364] duration metric: took 89.709µs to acquireMachinesLock for "kubernetes-upgrade-980000"
	I0728 18:36:04.548937    4864 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:36:04.548965    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:36:04.557182    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:36:04.573495    4864 start.go:159] libmachine.API.Create for "kubernetes-upgrade-980000" (driver="qemu2")
	I0728 18:36:04.573529    4864 client.go:168] LocalClient.Create starting
	I0728 18:36:04.573619    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:36:04.573663    4864 main.go:141] libmachine: Decoding PEM data...
	I0728 18:36:04.573670    4864 main.go:141] libmachine: Parsing certificate...
	I0728 18:36:04.573705    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:36:04.573728    4864 main.go:141] libmachine: Decoding PEM data...
	I0728 18:36:04.573737    4864 main.go:141] libmachine: Parsing certificate...
	I0728 18:36:04.574048    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:36:04.721311    4864 main.go:141] libmachine: Creating SSH key...
	I0728 18:36:04.760686    4864 main.go:141] libmachine: Creating Disk image...
	I0728 18:36:04.760695    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:36:04.760946    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:36:04.770923    4864 main.go:141] libmachine: STDOUT: 
	I0728 18:36:04.770949    4864 main.go:141] libmachine: STDERR: 
	I0728 18:36:04.771002    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2 +20000M
	I0728 18:36:04.779994    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:36:04.780013    4864 main.go:141] libmachine: STDERR: 
	I0728 18:36:04.780025    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:36:04.780029    4864 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:36:04.780041    4864 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:36:04.780087    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e5:41:9e:f2:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:36:04.782112    4864 main.go:141] libmachine: STDOUT: 
	I0728 18:36:04.782131    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:36:04.782143    4864 client.go:171] duration metric: took 208.610125ms to LocalClient.Create
	I0728 18:36:06.784325    4864 start.go:128] duration metric: took 2.235330334s to createHost
	I0728 18:36:06.784396    4864 start.go:83] releasing machines lock for "kubernetes-upgrade-980000", held for 2.235464209s
	W0728 18:36:06.784647    4864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:36:06.796196    4864 out.go:177] 
	W0728 18:36:06.799345    4864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:36:06.799370    4864 out.go:239] * 
	* 
	W0728 18:36:06.801731    4864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:36:06.810184    4864 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
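
Note on the failure mode: both start attempts above die at the same step, before the VM ever boots. minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so this is a host-side networking-daemon problem rather than anything specific to Kubernetes v1.20.0. A minimal shell sketch for triaging it on a similar Darwin host, assuming the lima-vm socket_vmnet layout these logs reference; the launchctl check and the gateway address are assumptions, not values taken from this report:

	# Is the unix socket present at the path minikube is configured to use?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is installed as a LaunchDaemon, confirm it is loaded
	# (the exact label depends on how it was installed, hence the grep):
	sudo launchctl list | grep -i socket_vmnet
	# Otherwise start the daemon by hand and retry the test; 192.168.105.1
	# is the upstream default gateway, not a value taken from this report:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
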
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-980000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-980000: (2.720669292s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-980000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-980000 status --format={{.Host}}: exit status 7 (53.278541ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
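
The "(may be ok)" hedge reflects the status exit-code scheme: "minikube status --help" describes the exit status as a bitmask over host, cluster, and Kubernetes health (1 = host not running, 2 = cluster not running, 4 = kubernetes not running). Under that scheme, the exit status 2 seen for running-upgrade-638000 earlier means a host that is up with an apiserver that is not, while exit status 7 here simply confirms everything is down after the deliberate stop. A small shell sketch decoding the code, assuming that documented scheme:

	out/minikube-darwin-arm64 -p kubernetes-upgrade-980000 status
	code=$?
	(( code & 1 )) && echo "host not running"
	(( code & 2 )) && echo "cluster not running"
	(( code & 4 )) && echo "kubernetes not running"
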
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182679666s)

-- stdout --
	* [kubernetes-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-980000" primary control-plane node in "kubernetes-upgrade-980000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:36:09.629239    4901 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:36:09.629373    4901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:09.629376    4901 out.go:304] Setting ErrFile to fd 2...
	I0728 18:36:09.629378    4901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:09.629515    4901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:36:09.630537    4901 out.go:298] Setting JSON to false
	I0728 18:36:09.646718    4901 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1722213029,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:36:09.646812    4901 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:36:09.652089    4901 out.go:177] * [kubernetes-upgrade-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:36:09.659126    4901 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:36:09.659217    4901 notify.go:220] Checking for updates...
	I0728 18:36:09.666024    4901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:36:09.669065    4901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:36:09.672020    4901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:36:09.675053    4901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:36:09.678044    4901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:36:09.681226    4901 config.go:182] Loaded profile config "kubernetes-upgrade-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0728 18:36:09.681479    4901 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:36:09.686046    4901 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:36:09.692934    4901 start.go:297] selected driver: qemu2
	I0728 18:36:09.692940    4901 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:36:09.692984    4901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:36:09.695379    4901 cni.go:84] Creating CNI manager for ""
	I0728 18:36:09.695397    4901 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:36:09.695424    4901 start.go:340] cluster config:
	{Name:kubernetes-upgrade-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:36:09.698955    4901 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:36:09.706915    4901 out.go:177] * Starting "kubernetes-upgrade-980000" primary control-plane node in "kubernetes-upgrade-980000" cluster
	I0728 18:36:09.711015    4901 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 18:36:09.711036    4901 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0728 18:36:09.711052    4901 cache.go:56] Caching tarball of preloaded images
	I0728 18:36:09.711118    4901 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:36:09.711124    4901 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 18:36:09.711193    4901 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kubernetes-upgrade-980000/config.json ...
	I0728 18:36:09.711727    4901 start.go:360] acquireMachinesLock for kubernetes-upgrade-980000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:36:09.711756    4901 start.go:364] duration metric: took 23.042µs to acquireMachinesLock for "kubernetes-upgrade-980000"
	I0728 18:36:09.711766    4901 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:36:09.711774    4901 fix.go:54] fixHost starting: 
	I0728 18:36:09.711895    4901 fix.go:112] recreateIfNeeded on kubernetes-upgrade-980000: state=Stopped err=<nil>
	W0728 18:36:09.711903    4901 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:36:09.718997    4901 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-980000" ...
	I0728 18:36:09.723030    4901 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:36:09.723065    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e5:41:9e:f2:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:36:09.725092    4901 main.go:141] libmachine: STDOUT: 
	I0728 18:36:09.725109    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:36:09.725144    4901 fix.go:56] duration metric: took 13.371125ms for fixHost
	I0728 18:36:09.725148    4901 start.go:83] releasing machines lock for "kubernetes-upgrade-980000", held for 13.387916ms
	W0728 18:36:09.725154    4901 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:36:09.725194    4901 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:36:09.725199    4901 start.go:729] Will try again in 5 seconds ...
	I0728 18:36:14.727401    4901 start.go:360] acquireMachinesLock for kubernetes-upgrade-980000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:36:14.727888    4901 start.go:364] duration metric: took 390.875µs to acquireMachinesLock for "kubernetes-upgrade-980000"
	I0728 18:36:14.727966    4901 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:36:14.727987    4901 fix.go:54] fixHost starting: 
	I0728 18:36:14.728732    4901 fix.go:112] recreateIfNeeded on kubernetes-upgrade-980000: state=Stopped err=<nil>
	W0728 18:36:14.728758    4901 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:36:14.737311    4901 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-980000" ...
	I0728 18:36:14.740382    4901 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:36:14.740693    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e5:41:9e:f2:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubernetes-upgrade-980000/disk.qcow2
	I0728 18:36:14.749292    4901 main.go:141] libmachine: STDOUT: 
	I0728 18:36:14.749347    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:36:14.749429    4901 fix.go:56] duration metric: took 21.444333ms for fixHost
	I0728 18:36:14.749449    4901 start.go:83] releasing machines lock for "kubernetes-upgrade-980000", held for 21.538667ms
	W0728 18:36:14.749605    4901 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:36:14.757287    4901 out.go:177] 
	W0728 18:36:14.760350    4901 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:36:14.760363    4901 out.go:239] * 
	W0728 18:36:14.761866    4901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:36:14.771382    4901 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-980000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-980000 version --output=json: exit status 1 (60.559ms)

** stderr ** 
	error: context "kubernetes-upgrade-980000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-28 18:36:14.845354 -0700 PDT m=+3025.402084626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-980000 -n kubernetes-upgrade-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-980000 -n kubernetes-upgrade-980000: exit status 7 (33.074875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-980000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-980000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-980000
--- FAIL: TestKubernetesUpgrade (18.01s)
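
Both restart attempts above fail at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never boots and the upgrade is never attempted. As a minimal sketch (assuming only that the socket_vmnet daemon is expected to listen on that unix socket; this probe is not part of the test suite), the reachability check can be reproduced outside minikube:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A refused dial here matches the driver error in the log.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, the daemon is simply not running on the host; for Homebrew installs, starting it (e.g. sudo brew services start socket_vmnet) is the usual remedy.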

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.72s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2351242226/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.72s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1673934117/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.71s)
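
Both TestHyperkitDriverSkipUpgrade subtests fail identically: the hyperkit driver exists only for darwin/amd64, so invoking it on this darwin/arm64 agent exits with DRV_UNSUPPORTED_OS (status 56). On an unsupported host the test would ideally skip rather than fail; a guard of the following shape (a sketch, not the actual code in driver_install_or_update_test.go) illustrates the idea:

	package upgrade_test

	import (
		"runtime"
		"testing"
	)

	// Skip early on hosts where the hyperkit binary cannot exist,
	// so the subtests report SKIP instead of exit status 56.
	func TestHyperkitDriverSkipUpgrade(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver requires darwin/amd64, running on %s/%s",
				runtime.GOOS, runtime.GOARCH)
		}
		// ... the upgrade-to-current scenarios would run here.
	}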

TestStoppedBinaryUpgrade/Upgrade (562.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2447886424 start -p stopped-upgrade-278000 --memory=2200 --vm-driver=qemu2 
E0728 18:36:33.754062    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2447886424 start -p stopped-upgrade-278000 --memory=2200 --vm-driver=qemu2 : (39.661902167s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2447886424 -p stopped-upgrade-278000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2447886424 -p stopped-upgrade-278000 stop: (3.090254s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-278000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0728 18:39:56.727969    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 18:41:33.714123    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-278000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m39.944171667s)

-- stdout --
	* [stopped-upgrade-278000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-278000" primary control-plane node in "stopped-upgrade-278000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-278000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0728 18:36:59.153745    4935 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:36:59.153919    4935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:59.153923    4935 out.go:304] Setting ErrFile to fd 2...
	I0728 18:36:59.153926    4935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:36:59.154084    4935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:36:59.155213    4935 out.go:298] Setting JSON to false
	I0728 18:36:59.173325    4935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3990,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:36:59.173395    4935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:36:59.177563    4935 out.go:177] * [stopped-upgrade-278000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:36:59.185497    4935 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:36:59.185538    4935 notify.go:220] Checking for updates...
	I0728 18:36:59.192021    4935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:36:59.195478    4935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:36:59.199557    4935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:36:59.200959    4935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:36:59.204505    4935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:36:59.207860    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:36:59.211506    4935 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0728 18:36:59.214598    4935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:36:59.218489    4935 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:36:59.225454    4935 start.go:297] selected driver: qemu2
	I0728 18:36:59.225459    4935 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:36:59.225511    4935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:36:59.227898    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:36:59.227915    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:36:59.227937    4935 start.go:340] cluster config:
	{Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:36:59.227995    4935 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:36:59.236487    4935 out.go:177] * Starting "stopped-upgrade-278000" primary control-plane node in "stopped-upgrade-278000" cluster
	I0728 18:36:59.240490    4935 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:36:59.240504    4935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0728 18:36:59.240514    4935 cache.go:56] Caching tarball of preloaded images
	I0728 18:36:59.240561    4935 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:36:59.240566    4935 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0728 18:36:59.240615    4935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/config.json ...
	I0728 18:36:59.241085    4935 start.go:360] acquireMachinesLock for stopped-upgrade-278000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:36:59.241117    4935 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "stopped-upgrade-278000"
	I0728 18:36:59.241126    4935 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:36:59.241132    4935 fix.go:54] fixHost starting: 
	I0728 18:36:59.241239    4935 fix.go:112] recreateIfNeeded on stopped-upgrade-278000: state=Stopped err=<nil>
	W0728 18:36:59.241248    4935 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:36:59.245472    4935 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-278000" ...
	I0728 18:36:59.253526    4935 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:36:59.253624    4935 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50445-:22,hostfwd=tcp::50446-:2376,hostname=stopped-upgrade-278000 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/disk.qcow2
	I0728 18:36:59.300661    4935 main.go:141] libmachine: STDOUT: 
	I0728 18:36:59.300695    4935 main.go:141] libmachine: STDERR: 
	I0728 18:36:59.300701    4935 main.go:141] libmachine: Waiting for VM to start (ssh -p 50445 docker@127.0.0.1)...
	I0728 18:37:19.213062    4935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/config.json ...
	I0728 18:37:19.213728    4935 machine.go:94] provisionDockerMachine start ...
	I0728 18:37:19.213949    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.214416    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.214429    4935 main.go:141] libmachine: About to run SSH command:
	hostname
	I0728 18:37:19.286908    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0728 18:37:19.286931    4935 buildroot.go:166] provisioning hostname "stopped-upgrade-278000"
	I0728 18:37:19.287009    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.287167    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.287174    4935 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-278000 && echo "stopped-upgrade-278000" | sudo tee /etc/hostname
	I0728 18:37:19.341921    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-278000
	
	I0728 18:37:19.341981    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.342090    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.342099    4935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-278000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-278000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-278000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 18:37:19.396221    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0728 18:37:19.396232    4935 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1229/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1229/.minikube}
	I0728 18:37:19.396239    4935 buildroot.go:174] setting up certificates
	I0728 18:37:19.396243    4935 provision.go:84] configureAuth start
	I0728 18:37:19.396254    4935 provision.go:143] copyHostCerts
	I0728 18:37:19.396336    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem, removing ...
	I0728 18:37:19.396341    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem
	I0728 18:37:19.396517    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.pem (1082 bytes)
	I0728 18:37:19.397140    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem, removing ...
	I0728 18:37:19.397143    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem
	I0728 18:37:19.397203    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/cert.pem (1123 bytes)
	I0728 18:37:19.397325    4935 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem, removing ...
	I0728 18:37:19.397328    4935 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem
	I0728 18:37:19.397384    4935 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1229/.minikube/key.pem (1679 bytes)
	I0728 18:37:19.397477    4935 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-278000 san=[127.0.0.1 localhost minikube stopped-upgrade-278000]
	I0728 18:37:19.653996    4935 provision.go:177] copyRemoteCerts
	I0728 18:37:19.654049    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 18:37:19.654060    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:19.684034    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 18:37:19.691035    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 18:37:19.697760    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0728 18:37:19.704827    4935 provision.go:87] duration metric: took 308.578958ms to configureAuth
	I0728 18:37:19.704838    4935 buildroot.go:189] setting minikube options for container-runtime
	I0728 18:37:19.704951    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:37:19.704983    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.705072    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.705076    4935 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 18:37:19.754818    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0728 18:37:19.754826    4935 buildroot.go:70] root file system type: tmpfs
	I0728 18:37:19.754880    4935 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 18:37:19.754926    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.755037    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.755073    4935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 18:37:19.809771    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 18:37:19.809815    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:19.809920    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:19.809929    4935 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 18:37:20.147529    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0728 18:37:20.147542    4935 machine.go:97] duration metric: took 933.805041ms to provisionDockerMachine
	I0728 18:37:20.147550    4935 start.go:293] postStartSetup for "stopped-upgrade-278000" (driver="qemu2")
	I0728 18:37:20.147557    4935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 18:37:20.147632    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 18:37:20.147641    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:20.175210    4935 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 18:37:20.176591    4935 info.go:137] Remote host: Buildroot 2021.02.12
	I0728 18:37:20.176599    4935 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/addons for local assets ...
	I0728 18:37:20.176678    4935 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1229/.minikube/files for local assets ...
	I0728 18:37:20.176797    4935 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem -> 17282.pem in /etc/ssl/certs
	I0728 18:37:20.176928    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 18:37:20.179726    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:37:20.186292    4935 start.go:296] duration metric: took 38.736834ms for postStartSetup
	I0728 18:37:20.186305    4935 fix.go:56] duration metric: took 20.945184542s for fixHost
	I0728 18:37:20.186335    4935 main.go:141] libmachine: Using SSH client type: native
	I0728 18:37:20.186433    4935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ffea10] 0x101001270 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0728 18:37:20.186437    4935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0728 18:37:20.236512    4935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217040.154939754
	
	I0728 18:37:20.236521    4935 fix.go:216] guest clock: 1722217040.154939754
	I0728 18:37:20.236525    4935 fix.go:229] Guest: 2024-07-28 18:37:20.154939754 -0700 PDT Remote: 2024-07-28 18:37:20.186307 -0700 PDT m=+21.057955834 (delta=-31.367246ms)
	I0728 18:37:20.236536    4935 fix.go:200] guest clock delta is within tolerance: -31.367246ms
	I0728 18:37:20.236539    4935 start.go:83] releasing machines lock for "stopped-upgrade-278000", held for 20.995427s
	I0728 18:37:20.236606    4935 ssh_runner.go:195] Run: cat /version.json
	I0728 18:37:20.236616    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:37:20.236621    4935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0728 18:37:20.236640    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	W0728 18:37:20.261470    4935 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0728 18:37:20.261532    4935 ssh_runner.go:195] Run: systemctl --version
	I0728 18:37:20.263930    4935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0728 18:37:20.266147    4935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0728 18:37:20.266181    4935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0728 18:37:20.270190    4935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0728 18:37:20.274796    4935 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0728 18:37:20.274807    4935 start.go:495] detecting cgroup driver to use...
	I0728 18:37:20.274879    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:37:20.283480    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0728 18:37:20.286583    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0728 18:37:20.289644    4935 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0728 18:37:20.289668    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0728 18:37:20.292997    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:37:20.296515    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0728 18:37:20.300224    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0728 18:37:20.303382    4935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0728 18:37:20.306475    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0728 18:37:20.309480    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0728 18:37:20.312975    4935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0728 18:37:20.316148    4935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 18:37:20.318882    4935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 18:37:20.321395    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:20.385497    4935 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 18:37:20.396134    4935 start.go:495] detecting cgroup driver to use...
	I0728 18:37:20.396196    4935 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 18:37:20.404454    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:37:20.443309    4935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0728 18:37:20.449824    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0728 18:37:20.454736    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:37:20.459213    4935 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 18:37:20.516982    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 18:37:20.522135    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 18:37:20.527128    4935 ssh_runner.go:195] Run: which cri-dockerd
	I0728 18:37:20.528322    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 18:37:20.530924    4935 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0728 18:37:20.535592    4935 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 18:37:20.597945    4935 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 18:37:20.662015    4935 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0728 18:37:20.662076    4935 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0728 18:37:20.667331    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:20.730068    4935 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:37:21.893506    4935 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163420708s)
	I0728 18:37:21.893557    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0728 18:37:21.898596    4935 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0728 18:37:21.904425    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:37:21.908770    4935 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0728 18:37:21.974487    4935 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 18:37:22.058446    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:22.122940    4935 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0728 18:37:22.128941    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0728 18:37:22.134052    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:22.186998    4935 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0728 18:37:22.225098    4935 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 18:37:22.225177    4935 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 18:37:22.229077    4935 start.go:563] Will wait 60s for crictl version
	I0728 18:37:22.229142    4935 ssh_runner.go:195] Run: which crictl
	I0728 18:37:22.230579    4935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0728 18:37:22.244854    4935 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0728 18:37:22.244929    4935 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:37:22.260791    4935 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 18:37:22.280278    4935 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0728 18:37:22.280342    4935 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0728 18:37:22.281753    4935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 18:37:22.285525    4935 kubeadm.go:883] updating cluster {Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0728 18:37:22.285572    4935 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0728 18:37:22.285611    4935 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:37:22.295982    4935 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:37:22.295989    4935 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:37:22.296030    4935 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:37:22.299433    4935 ssh_runner.go:195] Run: which lz4
	I0728 18:37:22.300725    4935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 18:37:22.302042    4935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0728 18:37:22.302053    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0728 18:37:23.253604    4935 docker.go:649] duration metric: took 952.908041ms to copy over tarball
	I0728 18:37:23.253665    4935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0728 18:37:24.453469    4935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.199787541s)
	I0728 18:37:24.453484    4935 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0728 18:37:24.469418    4935 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0728 18:37:24.472802    4935 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0728 18:37:24.477960    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:24.542509    4935 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 18:37:26.184629    4935 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.642102208s)
	I0728 18:37:26.184722    4935 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 18:37:26.201487    4935 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 18:37:26.201495    4935 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0728 18:37:26.201500    4935 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0728 18:37:26.206738    4935 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.208450    4935 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.210226    4935 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.210263    4935 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.212225    4935 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.212282    4935 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.213824    4935 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.214226    4935 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0728 18:37:26.215267    4935 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.215735    4935 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.216880    4935 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0728 18:37:26.216909    4935 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.217840    4935 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.217842    4935 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.218490    4935 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.219070    4935 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.584452    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.596287    4935 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0728 18:37:26.596307    4935 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.596360    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0728 18:37:26.599692    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.609023    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0728 18:37:26.614601    4935 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0728 18:37:26.614619    4935 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.614668    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0728 18:37:26.624586    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0728 18:37:26.625435    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.635309    4935 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0728 18:37:26.635330    4935 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.635381    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0728 18:37:26.645802    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0728 18:37:26.649779    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660122    4935 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0728 18:37:26.660142    4935 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660193    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0728 18:37:26.660508    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0728 18:37:26.672331    4935 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0728 18:37:26.672353    4935 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0728 18:37:26.672390    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0728 18:37:26.672408    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0728 18:37:26.681744    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0728 18:37:26.681857    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0728 18:37:26.684584    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0728 18:37:26.684594    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0728 18:37:26.692189    4935 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0728 18:37:26.692198    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0728 18:37:26.705260    4935 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0728 18:37:26.705378    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.714552    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.722206    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0728 18:37:26.728336    4935 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0728 18:37:26.728358    4935 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.728412    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0728 18:37:26.738162    4935 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0728 18:37:26.738188    4935 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.738244    4935 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0728 18:37:26.742720    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0728 18:37:26.742856    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:37:26.751924    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0728 18:37:26.751925    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0728 18:37:26.751959    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0728 18:37:26.793073    4935 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0728 18:37:26.793087    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0728 18:37:26.829612    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0728 18:37:26.978159    4935 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0728 18:37:26.978315    4935 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.996823    4935 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0728 18:37:26.996851    4935 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:26.996923    4935 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:37:27.014660    4935 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 18:37:27.014786    4935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:37:27.016347    4935 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0728 18:37:27.016360    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0728 18:37:27.045032    4935 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0728 18:37:27.045046    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0728 18:37:27.288746    4935 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0728 18:37:27.288788    4935 cache_images.go:92] duration metric: took 1.087280917s to LoadCachedImages
	W0728 18:37:27.288826    4935 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
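Each cached image goes through the same cycle above: docker image inspect resolves the tag to an ID, a miss or mismatch marks it "needs transfer", the stale tag is removed with docker rmi, and the cached tarball is piped into docker load. The final X line shows the run failing anyway because the kube-controller-manager cache file itself is missing on the host. A sketch of one such cycle (image name and cache path from the log):

// loadcached.go: sketch of the inspect / rmi / load cycle for one image.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	img := "registry.k8s.io/pause:3.7"
	cached := "/var/lib/minikube/images/pause_3.7" // cache path from the log
	// Inspect resolves the tag to an image ID; if a (wrong) copy is
	// present, the stale tag is removed before reloading.
	if out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", img).Output(); err == nil {
		fmt.Printf("present as %s, removing stale tag\n", out)
		exec.Command("docker", "rmi", img).Run()
	}
	// Pipe the tarball into docker load, as the log does via bash -c.
	load := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", cached))
	if out, err := load.CombinedOutput(); err != nil {
		fmt.Printf("load failed: %v\n%s", err, out)
	}
}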
	I0728 18:37:27.288833    4935 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0728 18:37:27.288879    4935 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-278000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
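The kubelet systemd drop-in above is rendered from the node's config. A sketch of that rendering with text/template (the struct fields here are illustrative, not minikube's actual types; values are taken from the log):

// kubeletunit.go: render a kubelet drop-in like the one above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("unit").Parse(unit))
	t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.24.1", "stopped-upgrade-278000", "10.0.2.15",
	})
}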
	I0728 18:37:27.288937    4935 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 18:37:27.302901    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:37:27.302914    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:37:27.302918    4935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0728 18:37:27.302927    4935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-278000 NodeName:stopped-upgrade-278000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0728 18:37:27.302996    4935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-278000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 18:37:27.303045    4935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0728 18:37:27.306244    4935 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 18:37:27.306277    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 18:37:27.309058    4935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0728 18:37:27.314344    4935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 18:37:27.319081    4935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0728 18:37:27.324276    4935 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0728 18:37:27.325582    4935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
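The one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current one, and copy the temp file back over /etc/hosts. A sketch wrapping the same shell pattern (IP and hostname taken from the log):

// hostsentry.go: sketch of the idempotent /etc/hosts rewrite above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ip, host := "10.0.2.15", "control-plane.minikube.internal"
	// Same shape as the bash one-liner in the log; $$ keeps the
	// temp file unique per shell invocation.
	script := fmt.Sprintf(
		`{ grep -v $'\t%s$' /etc/hosts; echo "%s	%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		host, ip, host)
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		fmt.Printf("hosts update failed: %v\n%s", err, out)
	}
}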
	I0728 18:37:27.329426    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:37:27.393987    4935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:37:27.404039    4935 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000 for IP: 10.0.2.15
	I0728 18:37:27.404049    4935 certs.go:194] generating shared ca certs ...
	I0728 18:37:27.404058    4935 certs.go:226] acquiring lock for ca certs: {Name:mkc846ff99a644cdf9e42c80143f563c1808731e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.404224    4935 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key
	I0728 18:37:27.404287    4935 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key
	I0728 18:37:27.404296    4935 certs.go:256] generating profile certs ...
	I0728 18:37:27.404377    4935 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key
	I0728 18:37:27.404396    4935 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae
	I0728 18:37:27.404407    4935 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0728 18:37:27.491632    4935 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae ...
	I0728 18:37:27.491648    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae: {Name:mk7ce09ea1f4e1e0adc458a4492d3e91736b42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.493065    4935 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae ...
	I0728 18:37:27.493073    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae: {Name:mkd7d851e0b6b2aa160e38a41ed99c247a312f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.493232    4935 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt.bc91ceae -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt
	I0728 18:37:27.493394    4935 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key.bc91ceae -> /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key
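The apiserver cert generated above carries IP SANs for the service VIP (10.96.0.1), loopback, and the node address, so clients can verify the server at any of them. A sketch of issuing such a cert with crypto/x509 (self-signed here for brevity, whereas minikube signs with its cluster CA; the 26280h lifetime mirrors the CertExpiration value in the config dump below):

// apisans.go: issue a cert with IP SANs like the apiserver cert above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// The four IPs the log lists for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}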
	I0728 18:37:27.493552    4935 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.key
	I0728 18:37:27.493691    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem (1338 bytes)
	W0728 18:37:27.493722    4935 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728_empty.pem, impossibly tiny 0 bytes
	I0728 18:37:27.493728    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca-key.pem (1675 bytes)
	I0728 18:37:27.493747    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem (1082 bytes)
	I0728 18:37:27.493766    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem (1123 bytes)
	I0728 18:37:27.493783    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/key.pem (1679 bytes)
	I0728 18:37:27.493819    4935 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem (1708 bytes)
	I0728 18:37:27.494150    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 18:37:27.501137    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 18:37:27.508695    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 18:37:27.516221    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 18:37:27.523492    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0728 18:37:27.530161    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 18:37:27.537311    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 18:37:27.544657    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 18:37:27.552030    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0728 18:37:27.558730    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/ssl/certs/17282.pem --> /usr/share/ca-certificates/17282.pem (1708 bytes)
	I0728 18:37:27.565290    4935 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 18:37:27.572375    4935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 18:37:27.577651    4935 ssh_runner.go:195] Run: openssl version
	I0728 18:37:27.579528    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0728 18:37:27.582350    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.583774    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:54 /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.583792    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0728 18:37:27.585536    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0728 18:37:27.588889    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17282.pem && ln -fs /usr/share/ca-certificates/17282.pem /etc/ssl/certs/17282.pem"
	I0728 18:37:27.592280    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.593734    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:54 /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.593750    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17282.pem
	I0728 18:37:27.595642    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17282.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 18:37:27.598480    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 18:37:27.601420    4935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.602807    4935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.602824    4935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 18:37:27.604396    4935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
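Each of the three ln -fs steps above creates the OpenSSL subject-hash symlink (<hash>.0) through which the system trust store locates a CA PEM; the hash itself comes from openssl x509 -hash -noout. A sketch of the same step (PEM path taken from the log):

// cahash.go: compute a PEM's OpenSSL subject hash and link <hash>.0.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Guarded ln -fs, mirroring the log's `test -L || ln -fs` idiom.
	exec.Command("/bin/bash", "-c",
		fmt.Sprintf("test -L %s || sudo ln -fs %s %s", link, pemPath, link)).Run()
}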
	I0728 18:37:27.607380    4935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0728 18:37:27.608870    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0728 18:37:27.610828    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0728 18:37:27.612557    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0728 18:37:27.614631    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0728 18:37:27.616375    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0728 18:37:27.618194    4935 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
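The -checkend 86400 probes above exit non-zero if a cert expires within the next 24 hours; this is how the restart path decides whether certificates need regeneration. A one-file sketch (cert path from the log):

// checkend.go: probe a cert's remaining lifetime via openssl -checkend.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	crt := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	err := exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run()
	fmt.Println("expires within 24h:", err != nil)
}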
	I0728 18:37:27.619992    4935 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0728 18:37:27.620069    4935 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:37:27.630345    4935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 18:37:27.633588    4935 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0728 18:37:27.633593    4935 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0728 18:37:27.633612    4935 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 18:37:27.637256    4935 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 18:37:27.637566    4935 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-278000" does not appear in /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:37:27.637674    4935 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1229/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-278000" cluster setting kubeconfig missing "stopped-upgrade-278000" context setting]
	I0728 18:37:27.637862    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:37:27.638311    4935 kapi.go:59] client config for stopped-upgrade-278000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023945c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:37:27.638638    4935 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 18:37:27.641470    4935 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-278000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
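The drift detection above is just diff -u old new: diff exits 1 when the files differ, and any difference triggers a full reconfigure from the freshly rendered YAML. A sketch of the same check (paths from the log):

// drift.go: detect kubeadm config drift by diffing deployed vs rendered.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cur, next := "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"
	out, err := exec.Command("sudo", "diff", "-u", cur, next).CombinedOutput()
	if err != nil { // diff exits 1 when the files differ
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}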
	I0728 18:37:27.641477    4935 kubeadm.go:1160] stopping kube-system containers ...
	I0728 18:37:27.641517    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 18:37:27.652570    4935 docker.go:483] Stopping containers: [912ef6eb9272 248ada8e5eb9 28fa0bcdbb2a b959039eb684 0ffba4e92043 988ccb20029d c67d661575ed ed9398b7868e]
	I0728 18:37:27.652632    4935 ssh_runner.go:195] Run: docker stop 912ef6eb9272 248ada8e5eb9 28fa0bcdbb2a b959039eb684 0ffba4e92043 988ccb20029d c67d661575ed ed9398b7868e
	I0728 18:37:27.663715    4935 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 18:37:27.669085    4935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:37:27.672501    4935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:37:27.672508    4935 kubeadm.go:157] found existing configuration files:
	
	I0728 18:37:27.672539    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0728 18:37:27.675477    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:37:27.675498    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:37:27.677977    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0728 18:37:27.680724    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:37:27.680748    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:37:27.683761    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0728 18:37:27.686129    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:37:27.686152    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:37:27.689000    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0728 18:37:27.691989    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:37:27.692016    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
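The four grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A sketch of that loop (endpoint and paths from the log):

// staleconf.go: remove kubeconfigs that lack the expected endpoint.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50479"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero if the endpoint (or the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Println("removing stale or missing", conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}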
	I0728 18:37:27.694524    4935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:37:27.697363    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:27.719565    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.156996    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.270076    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 18:37:28.292612    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
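Rather than a full kubeadm init, the restart replays individual init phases against the rendered config, as the five Run lines above show. A sketch of that sequence (binary and config paths from the log):

// phases.go: replay kubeadm init phases for a control-plane restart.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	} {
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}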
	I0728 18:37:28.318480    4935 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:37:28.318555    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:37:28.819287    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:37:29.319426    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:37:29.323734    4935 api_server.go:72] duration metric: took 1.005257042s to wait for apiserver process to appear ...
	I0728 18:37:29.323742    4935 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:37:29.323750    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:34.324539    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:34.324561    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:39.325805    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:39.325871    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:44.326393    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:44.326418    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:49.326790    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:49.326818    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:54.327279    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:54.327316    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:37:59.328000    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:37:59.328033    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:04.328825    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:04.328842    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:09.329805    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:09.329827    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:14.330552    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:14.330573    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:19.331974    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:19.332009    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:24.332722    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:24.332744    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:29.324884    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
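Note that every healthz probe in this run times out after roughly 5s and the apiserver never reports ready, so the restart falls through to the diagnostics pass below. A sketch of the readiness probe itself (URL from the log; InsecureSkipVerify stands in for the CA wiring minikube actually configures):

// healthz.go: poll the apiserver /healthz endpoint until a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}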
	I0728 18:38:29.325057    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:29.345181    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:29.345259    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:29.355742    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:29.355811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:29.366642    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:29.366714    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:29.377746    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:29.377817    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:29.388353    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:29.388429    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:29.398750    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:29.398832    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:29.409066    4935 logs.go:276] 0 containers: []
	W0728 18:38:29.409078    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:29.409135    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:29.419308    4935 logs.go:276] 0 containers: []
	W0728 18:38:29.419324    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:29.419332    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:29.419338    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:29.423837    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:29.423844    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:29.464008    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:29.464016    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:29.475868    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:29.475878    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:29.491290    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:29.491304    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:29.509303    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:29.509318    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:29.523316    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:29.523329    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:29.549878    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:29.549892    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:29.568737    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:29.568751    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:29.579845    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:29.579856    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:29.605191    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:29.605209    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:29.617322    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:29.617334    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:29.702188    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:29.702201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:29.716761    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:29.716775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:29.731994    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:29.732006    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
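The diagnostics pass above locates each control-plane component's containers by the k8s_<name> prefix and tails the last 400 lines of each. A sketch of that gathering loop (component names and tail length from the log):

// gatherlogs.go: find component containers by name prefix and tail logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", comp, id, logs)
		}
	}
}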
	I0728 18:38:32.241939    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:37.238320    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:37.238482    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:37.249456    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:37.249544    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:37.260739    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:37.260815    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:37.271569    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:37.271639    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:37.282187    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:37.282278    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:37.292862    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:37.292935    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:37.307311    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:37.307384    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:37.317520    4935 logs.go:276] 0 containers: []
	W0728 18:38:37.317535    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:37.317599    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:37.328078    4935 logs.go:276] 0 containers: []
	W0728 18:38:37.328088    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:37.328097    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:37.328103    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:37.339849    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:37.339861    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:37.344692    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:37.344698    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:37.382055    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:37.382069    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:37.396272    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:37.396282    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:37.414781    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:37.414792    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:37.427841    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:37.427852    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:37.452925    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:37.452936    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:37.466937    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:37.466947    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:37.479129    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:37.479139    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:37.493512    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:37.493521    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:37.504845    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:37.504857    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:37.529734    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:37.529742    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:37.566158    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:37.566165    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:37.577951    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:37.577961    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:40.094936    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:45.092877    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:45.093122    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:45.113360    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:45.113447    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:45.127824    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:45.127909    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:45.138956    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:45.139027    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:45.149932    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:45.149997    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:45.160748    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:45.160817    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:45.172430    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:45.172508    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:45.183253    4935 logs.go:276] 0 containers: []
	W0728 18:38:45.183267    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:45.183326    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:45.193064    4935 logs.go:276] 0 containers: []
	W0728 18:38:45.193076    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:45.193085    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:45.193090    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:45.197870    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:45.197877    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:45.232535    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:45.232548    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:45.246194    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:45.246203    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:45.271734    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:45.271746    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:45.289559    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:45.289568    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:45.328832    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:45.328842    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:45.347292    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:45.347302    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:45.363590    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:45.363599    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:45.387161    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:45.387169    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
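The container-status step uses a shell fallback: `which crictl || echo crictl` substitutes crictl when it is installed, and the trailing `|| sudo docker ps -a` retries with docker if the crictl invocation fails. The same preference order, sketched in Go (hypothetical helper; assumes passwordless sudo on the node):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback above: prefer crictl when it is
// on PATH, otherwise fall back to docker for the all-containers listing.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
```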
	I0728 18:38:45.398581    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:45.398592    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:45.415675    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:45.415686    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:45.433325    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:45.433339    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:45.444591    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:45.444603    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:45.456369    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:45.456380    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
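The cycle that ends here is the unit the rest of this section repeats: `docker logs --tail 400` for each container discovered above, plus `journalctl` for the kubelet and Docker units and a dmesg pass. A compact sketch of one such sweep, under assumptions (the container IDs below are stand-ins for whatever the enumeration step returned):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command and prints whatever it produced;
// errors are reported but do not stop the sweep, matching the log's behavior.
func gather(name string, cmd *exec.Cmd) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	// stand-in container IDs; the real sweep uses the IDs discovered earlier
	for _, id := range []string{"57bf79d9f4a0", "0043dffc83dd"} {
		gather(id, exec.Command("docker", "logs", "--tail", "400", id))
	}
	// systemd units are read with journalctl; assumes sufficient privileges
	gather("kubelet", exec.Command("journalctl", "-u", "kubelet", "-n", "400"))
	gather("Docker", exec.Command("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"))
}
```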
	I0728 18:38:47.971482    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:38:52.971621    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:38:52.971724    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:38:52.982981    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:38:52.983061    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:38:52.993295    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:38:52.993370    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:38:53.004192    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:38:53.004265    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:38:53.014647    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:38:53.014713    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:38:53.025073    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:38:53.025131    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:38:53.035327    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:38:53.035397    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:38:53.045570    4935 logs.go:276] 0 containers: []
	W0728 18:38:53.045586    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:38:53.045648    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:38:53.055601    4935 logs.go:276] 0 containers: []
	W0728 18:38:53.055613    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:38:53.055623    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:38:53.055629    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:38:53.095144    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:38:53.095156    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:38:53.108375    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:38:53.108385    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:38:53.133456    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:38:53.133468    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:38:53.148165    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:38:53.148176    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:38:53.160037    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:38:53.160051    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:38:53.178000    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:38:53.178012    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:38:53.182240    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:38:53.182246    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:38:53.196934    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:38:53.196945    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:38:53.208632    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:38:53.208643    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:38:53.220801    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:38:53.220812    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:38:53.244515    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:38:53.244523    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:38:53.281102    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:38:53.281112    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:38:53.295041    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:38:53.295053    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:38:53.314630    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:38:53.314643    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:38:55.832036    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:00.833034    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:00.833253    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:00.852565    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:00.852657    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:00.868068    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:00.868150    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:00.884768    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:00.884827    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:00.895740    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:00.895811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:00.905821    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:00.905894    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:00.921448    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:00.921516    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:00.932210    4935 logs.go:276] 0 containers: []
	W0728 18:39:00.932221    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:00.932284    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:00.942414    4935 logs.go:276] 0 containers: []
	W0728 18:39:00.942426    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:00.942433    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:00.942439    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:00.981223    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:00.981238    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:00.992618    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:00.992634    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:01.008204    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:01.008214    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:01.046598    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:01.046606    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:01.060159    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:01.060174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:01.074850    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:01.074859    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:01.098739    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:01.098747    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:01.112997    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:01.113007    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:01.132550    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:01.132559    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:01.144468    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:01.144478    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:01.162090    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:01.162104    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:01.167006    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:01.167014    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:01.196247    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:01.196262    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:01.211076    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:01.211090    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
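Stepping back, everything from the first healthz timeout onward is one outer retry loop: probe /healthz, and on failure run the full diagnostic sweep before probing again, about every eight seconds until the overall wait expires. A deliberately simplified sketch of that control flow, with stand-in helpers for the steps shown above (this is an illustration of the observed behavior, not minikube's code):

```go
package main

import (
	"fmt"
	"time"
)

// stand-ins for the real steps: the healthz probe and the log sweep above
func healthz() error     { return fmt.Errorf("context deadline exceeded") }
func gatherDiagnostics() { /* docker ps filters, docker logs, journalctl, as above */ }

func main() {
	deadline := time.Now().Add(1 * time.Minute) // the wait in this log runs far longer
	for time.Now().Before(deadline) {
		if err := healthz(); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		gatherDiagnostics()
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}
```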
	I0728 18:39:03.730419    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:08.732017    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:08.732276    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:08.755877    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:08.756000    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:08.772322    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:08.772405    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:08.784607    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:08.784678    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:08.803228    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:08.803296    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:08.813860    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:08.813920    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:08.824334    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:08.824404    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:08.834237    4935 logs.go:276] 0 containers: []
	W0728 18:39:08.834248    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:08.834300    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:08.849233    4935 logs.go:276] 0 containers: []
	W0728 18:39:08.849245    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:08.849252    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:08.849257    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:08.863983    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:08.863996    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:08.875043    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:08.875053    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:08.887369    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:08.887382    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:08.931707    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:08.931721    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:08.957555    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:08.957566    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:08.969444    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:08.969453    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:08.994533    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:08.994542    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:09.030190    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:09.030201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:09.043746    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:09.043762    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:09.047806    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:09.047815    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:09.065423    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:09.065436    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:09.080059    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:09.080073    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:09.097733    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:09.097743    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:09.111524    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:09.111538    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:11.626478    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:16.628248    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:16.628443    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:16.651647    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:16.651751    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:16.666183    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:16.666266    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:16.678368    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:16.678433    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:16.689508    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:16.689583    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:16.700341    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:16.700414    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:16.710805    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:16.710870    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:16.720612    4935 logs.go:276] 0 containers: []
	W0728 18:39:16.720622    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:16.720679    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:16.730729    4935 logs.go:276] 0 containers: []
	W0728 18:39:16.730739    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:16.730746    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:16.730752    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:16.769601    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:16.769609    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:16.773571    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:16.773577    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:16.787894    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:16.787904    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:16.800129    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:16.800139    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:16.825136    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:16.825143    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:16.859977    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:16.859988    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:16.875413    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:16.875424    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:16.892924    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:16.892935    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:16.912503    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:16.912517    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:16.933821    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:16.933834    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:16.948627    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:16.948638    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:16.978911    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:16.978921    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:16.992748    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:16.992758    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:17.003885    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:17.003898    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:19.519815    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:24.521696    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:24.521883    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:24.538732    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:24.538815    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:24.551436    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:24.551511    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:24.569429    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:24.569485    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:24.579961    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:24.580032    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:24.590451    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:24.590512    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:24.601165    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:24.601224    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:24.611572    4935 logs.go:276] 0 containers: []
	W0728 18:39:24.611589    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:24.611639    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:24.621639    4935 logs.go:276] 0 containers: []
	W0728 18:39:24.621650    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:24.621658    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:24.621664    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:24.625770    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:24.625779    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:24.637226    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:24.637236    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:24.673280    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:24.673293    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:24.687598    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:24.687611    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:24.703934    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:24.703944    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:24.727506    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:24.727514    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:24.742457    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:24.742472    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:24.756434    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:24.756444    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:24.779945    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:24.779955    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:24.792226    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:24.792238    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:24.810538    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:24.810549    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:24.822401    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:24.822413    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:24.861387    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:24.861396    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:24.885575    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:24.885585    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:27.398429    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:32.400511    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:32.400685    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:32.418383    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:32.418469    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:32.432246    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:32.432323    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:32.443363    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:32.443429    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:32.453910    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:32.453985    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:32.464401    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:32.464473    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:32.475572    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:32.475644    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:32.486224    4935 logs.go:276] 0 containers: []
	W0728 18:39:32.486235    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:32.486293    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:32.496222    4935 logs.go:276] 0 containers: []
	W0728 18:39:32.496236    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:32.496244    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:32.496250    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:32.521781    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:32.521794    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:32.540169    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:32.540179    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:32.552061    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:32.552071    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:32.566137    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:32.566146    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:32.578029    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:32.578040    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:32.582254    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:32.582261    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:32.596493    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:32.596504    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:32.612182    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:32.612193    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:32.626787    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:32.626801    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:32.651933    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:32.651943    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:32.677475    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:32.677486    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:32.716573    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:32.716582    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:32.775915    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:32.775925    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:32.790554    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:32.790569    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:35.303542    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:40.305618    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:40.305839    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:40.324367    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:40.324456    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:40.337695    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:40.337770    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:40.349213    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:40.349284    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:40.359924    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:40.359993    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:40.370939    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:40.371012    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:40.381639    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:40.381710    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:40.392104    4935 logs.go:276] 0 containers: []
	W0728 18:39:40.392114    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:40.392173    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:40.402652    4935 logs.go:276] 0 containers: []
	W0728 18:39:40.402665    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:40.402673    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:40.402679    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:40.416664    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:40.416674    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:40.428395    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:40.428406    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:40.467743    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:40.467753    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:40.472226    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:40.472233    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:40.496790    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:40.496803    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:40.511522    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:40.511537    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:40.529127    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:40.529137    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:40.547324    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:40.547334    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:40.559139    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:40.559149    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:40.595251    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:40.595262    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:40.609668    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:40.609678    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:40.624667    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:40.624676    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:40.639180    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:40.639189    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:40.656260    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:40.656271    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:43.182323    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:48.184543    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:48.184918    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:48.213021    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:48.213151    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:48.230639    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:48.230727    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:48.246501    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:48.246573    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:48.265438    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:48.265502    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:48.276306    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:48.276368    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:48.287389    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:48.287453    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:48.297879    4935 logs.go:276] 0 containers: []
	W0728 18:39:48.297892    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:48.297945    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:48.307900    4935 logs.go:276] 0 containers: []
	W0728 18:39:48.307912    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:48.307920    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:48.307927    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:48.343552    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:48.343563    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:48.365254    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:48.365263    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:48.382433    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:48.382449    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:48.396335    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:48.396349    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:48.414006    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:48.414019    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:48.425111    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:48.425120    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:48.448046    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:48.448053    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:48.461885    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:48.461896    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:48.473008    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:48.473017    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:48.512120    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:48.512133    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:48.516572    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:48.516579    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:48.530312    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:48.530321    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:48.555183    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:48.555192    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:48.566839    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:48.566853    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:51.083423    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:39:56.084315    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:39:56.084644    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:39:56.123923    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:39:56.124072    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:39:56.145110    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:39:56.145205    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:39:56.160024    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:39:56.160104    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:39:56.172714    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:39:56.172790    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:39:56.183745    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:39:56.183811    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:39:56.195987    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:39:56.196061    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:39:56.206510    4935 logs.go:276] 0 containers: []
	W0728 18:39:56.206521    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:39:56.206578    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:39:56.216519    4935 logs.go:276] 0 containers: []
	W0728 18:39:56.216533    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:39:56.216541    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:39:56.216547    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:39:56.239828    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:39:56.239840    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:39:56.251550    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:39:56.251563    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:39:56.266129    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:39:56.266139    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:39:56.277615    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:39:56.277626    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:39:56.295349    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:39:56.295362    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:39:56.306841    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:39:56.306851    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:39:56.346145    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:39:56.346153    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:39:56.350291    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:39:56.350298    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:39:56.369001    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:39:56.369011    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:39:56.403804    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:39:56.403814    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:39:56.417762    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:39:56.417775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:39:56.431469    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:39:56.431482    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:39:56.456345    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:39:56.456356    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:39:56.471606    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:39:56.471620    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:39:58.985989    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:03.988269    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:03.988621    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:04.019049    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:04.019173    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:04.036527    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:04.036628    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:04.050245    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:04.050350    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:04.061992    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:04.062060    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:04.072761    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:04.072825    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:04.083900    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:04.083964    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:04.094885    4935 logs.go:276] 0 containers: []
	W0728 18:40:04.094903    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:04.094959    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:04.105165    4935 logs.go:276] 0 containers: []
	W0728 18:40:04.105175    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:04.105183    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:04.105190    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:04.119968    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:04.119978    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:04.131910    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:04.131922    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:04.143764    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:04.143775    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:04.181240    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:04.181260    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:04.196095    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:04.196108    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:04.211612    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:04.211625    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:04.223754    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:04.223766    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:04.243813    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:04.243827    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:04.268461    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:04.268468    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:04.304433    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:04.304445    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:04.330396    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:04.330407    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:04.343777    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:04.343787    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:04.348278    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:04.348286    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:04.362808    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:04.362819    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:06.876507    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:11.878718    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:11.878890    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:11.891184    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:11.891267    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:11.902019    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:11.902098    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:11.912478    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:11.912539    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:11.922838    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:11.922913    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:11.933319    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:11.933385    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:11.947258    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:11.947326    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:11.958735    4935 logs.go:276] 0 containers: []
	W0728 18:40:11.958746    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:11.958810    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:11.968745    4935 logs.go:276] 0 containers: []
	W0728 18:40:11.968756    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:11.968764    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:11.968770    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:11.980472    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:11.980486    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:12.019491    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:12.019500    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:12.053762    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:12.053775    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:12.079172    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:12.079183    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:12.094179    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:12.094187    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:12.119319    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:12.119328    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:12.134805    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:12.134816    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:12.146277    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:12.146289    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:12.158086    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:12.158096    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:12.175711    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:12.175725    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:12.180168    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:12.180174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:12.194206    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:12.194220    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:12.208188    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:12.208201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:12.223314    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:12.223323    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
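
Each diagnostic pass above has the same shape: list the containers backing each control-plane component via a docker name filter (k8s_<component>), then tail the last 400 lines of every matching container's logs. A rough sketch of that loop, assuming a reachable docker daemon (illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors the repeated "docker logs --tail 400 <id>" commands.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s (%s) ---\n%s", c, id, logs)
		}
	}
}
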
	I0728 18:40:14.741086    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:19.743345    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
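
The api_server.go pair that repeats through this run is a health-check loop: minikube polls the apiserver's /healthz endpoint, and each "stopped" line lands about five seconds after its "Checking" line because the HTTP client times out; the component logs above are then gathered and the poll retried. A minimal sketch of such a poller, with the URL and per-request timeout taken from the log and the rest assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForAPIServer(url string, deadline time.Duration) error {
	client := &http.Client{
		// Matches the ~5 s gap between "Checking" and "stopped" lines.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200 OK
			}
		}
		time.Sleep(2 * time.Second) // back off, then poll again
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForAPIServer("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
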
	I0728 18:40:19.743571    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:19.766935    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:19.767055    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:19.784755    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:19.784838    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:19.799790    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:19.799863    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:19.811005    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:19.811081    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:19.821680    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:19.821753    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:19.832029    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:19.832094    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:19.842231    4935 logs.go:276] 0 containers: []
	W0728 18:40:19.842245    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:19.842310    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:19.852144    4935 logs.go:276] 0 containers: []
	W0728 18:40:19.852157    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:19.852166    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:19.852172    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:19.856627    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:19.856634    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:19.871232    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:19.871243    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:19.885476    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:19.885487    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:19.896392    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:19.896403    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:19.912202    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:19.912212    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:19.934628    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:19.934636    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:19.946132    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:19.946142    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:19.983856    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:19.983867    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:20.008543    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:20.008559    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:20.022776    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:20.022786    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:20.034425    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:20.034435    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:20.068930    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:20.068940    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:20.087604    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:20.087615    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:20.099543    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:20.099558    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:22.619508    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:27.621773    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:27.621940    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:27.639410    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:27.639495    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:27.655136    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:27.655209    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:27.666934    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:27.666997    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:27.677366    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:27.677428    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:27.687689    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:27.687754    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:27.705645    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:27.705708    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:27.715633    4935 logs.go:276] 0 containers: []
	W0728 18:40:27.715646    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:27.715700    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:27.725382    4935 logs.go:276] 0 containers: []
	W0728 18:40:27.725393    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:27.725401    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:27.725406    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:27.729673    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:27.729680    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:27.743521    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:27.743533    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:27.762300    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:27.762310    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:27.784958    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:27.784965    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:27.810007    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:27.810018    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:27.823985    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:27.823995    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:27.835552    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:27.835564    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:27.852806    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:27.852816    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:27.888887    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:27.888894    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:27.923366    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:27.923379    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:27.936168    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:27.936180    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:27.951564    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:27.951575    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:27.963582    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:27.963596    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:27.976776    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:27.976789    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:30.490460    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:35.492815    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:35.492975    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:35.509171    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:35.509258    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:35.521699    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:35.521835    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:35.532824    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:35.532895    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:35.551257    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:35.551335    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:35.562229    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:35.562301    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:35.577198    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:35.577265    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:35.587671    4935 logs.go:276] 0 containers: []
	W0728 18:40:35.587681    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:35.587738    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:35.597951    4935 logs.go:276] 0 containers: []
	W0728 18:40:35.597962    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:35.597970    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:35.597998    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:35.620194    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:35.620205    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:35.631914    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:35.631927    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:35.648627    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:35.648639    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:35.660652    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:35.660663    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:35.674975    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:35.674985    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:35.686582    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:35.686593    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:35.700814    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:35.700824    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:35.724087    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:35.724095    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:35.728285    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:35.728292    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:35.743130    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:35.743141    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:35.757318    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:35.757331    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:35.795832    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:35.795840    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:35.820238    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:35.820250    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:35.832108    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:35.832118    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:38.369465    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:43.371782    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:43.371952    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:43.384282    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:43.384351    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:43.396464    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:43.396532    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:43.406718    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:43.406776    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:43.417254    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:43.417319    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:43.428082    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:43.428166    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:43.448145    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:43.448204    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:43.458468    4935 logs.go:276] 0 containers: []
	W0728 18:40:43.458480    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:43.458541    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:43.469676    4935 logs.go:276] 0 containers: []
	W0728 18:40:43.469687    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:43.469696    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:43.469702    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:43.491808    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:43.491818    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:43.517069    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:43.517082    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:43.530783    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:43.530794    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:43.555292    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:43.555303    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:43.567187    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:43.567198    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:43.571835    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:43.571842    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:43.606768    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:43.606781    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:43.618683    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:43.618696    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:43.641411    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:43.641421    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:43.656866    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:43.656876    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:43.695048    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:43.695056    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:43.706221    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:43.706231    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:43.721851    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:43.721865    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:43.740218    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:43.740231    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:46.256512    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:51.258931    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:51.259140    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:51.282968    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:51.283089    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:51.298979    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:51.299059    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:51.311570    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:51.311648    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:51.322394    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:51.322461    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:51.332427    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:51.332499    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:51.344828    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:51.344902    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:51.355164    4935 logs.go:276] 0 containers: []
	W0728 18:40:51.355173    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:51.355228    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:51.365203    4935 logs.go:276] 0 containers: []
	W0728 18:40:51.365215    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:51.365223    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:51.365229    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:51.376216    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:51.376231    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:51.400420    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:51.400431    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:51.404831    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:51.404837    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:51.430333    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:51.430355    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:51.448519    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:51.448533    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:51.486410    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:51.486426    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:51.499196    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:51.499208    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:51.513299    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:51.513311    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:51.527062    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:51.527072    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:40:51.551162    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:51.551174    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:51.571986    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:51.572000    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:51.585949    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:51.585960    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:51.622912    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:51.622923    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:51.637202    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:51.637213    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:54.155101    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:40:59.157314    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:40:59.157421    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:40:59.168273    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:40:59.168352    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:40:59.179244    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:40:59.179315    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:40:59.191455    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:40:59.191529    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:40:59.202575    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:40:59.202652    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:40:59.213323    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:40:59.213394    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:40:59.223714    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:40:59.223782    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:40:59.234074    4935 logs.go:276] 0 containers: []
	W0728 18:40:59.234086    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:40:59.234137    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:40:59.244663    4935 logs.go:276] 0 containers: []
	W0728 18:40:59.244672    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:40:59.244681    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:40:59.244686    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:40:59.280072    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:40:59.280083    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:40:59.292815    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:40:59.292825    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:40:59.316188    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:40:59.316195    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:40:59.330099    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:40:59.330109    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:40:59.342123    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:40:59.342134    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:40:59.354226    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:40:59.354236    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:40:59.370048    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:40:59.370059    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:40:59.388540    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:40:59.388550    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:40:59.405817    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:40:59.405827    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:40:59.420295    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:40:59.420305    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:40:59.435398    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:40:59.435408    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:40:59.449498    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:40:59.449508    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:40:59.488352    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:40:59.488360    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:40:59.492271    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:40:59.492278    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:02.018806    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:07.021160    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:07.021350    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:07.041182    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:07.041275    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:07.055317    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:07.055394    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:07.067572    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:07.067643    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:07.077974    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:07.078038    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:07.088307    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:07.088365    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:07.101467    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:07.101551    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:07.111424    4935 logs.go:276] 0 containers: []
	W0728 18:41:07.111442    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:07.111519    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:07.126281    4935 logs.go:276] 0 containers: []
	W0728 18:41:07.126296    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:07.126304    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:07.126310    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:07.137819    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:07.137829    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:07.160661    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:07.160674    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:07.173313    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:07.173324    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:07.185416    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:07.185426    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:07.189965    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:07.189971    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:07.224345    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:07.224354    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:07.238503    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:07.238513    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:07.250622    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:07.250635    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:07.290621    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:07.290633    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:07.305245    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:07.305258    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:07.322769    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:07.322779    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:07.352233    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:07.352243    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:07.367018    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:07.367031    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:07.381662    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:07.381671    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:09.899585    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:14.902000    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:14.902253    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:14.928770    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:14.928877    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:14.946961    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:14.947054    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:14.961299    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:14.961377    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:14.972856    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:14.972926    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:14.983253    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:14.983314    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:14.993571    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:14.993630    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:15.004408    4935 logs.go:276] 0 containers: []
	W0728 18:41:15.004420    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:15.004479    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:15.014579    4935 logs.go:276] 0 containers: []
	W0728 18:41:15.014590    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:15.014598    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:15.014602    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:15.026359    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:15.026372    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:15.050669    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:15.050678    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:15.075274    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:15.075286    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:15.089482    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:15.089493    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:15.101284    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:15.101299    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:15.116228    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:15.116239    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:15.127464    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:15.127476    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:15.141371    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:15.141382    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:15.156079    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:15.156092    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:15.190616    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:15.190627    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:15.204890    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:15.204900    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:15.222186    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:15.222196    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:15.242381    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:15.242393    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:15.280896    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:15.280905    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:17.785971    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:22.786507    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:22.786728    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:41:22.809779    4935 logs.go:276] 2 containers: [57bf79d9f4a0 912ef6eb9272]
	I0728 18:41:22.809906    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:41:22.825097    4935 logs.go:276] 2 containers: [0043dffc83dd 248ada8e5eb9]
	I0728 18:41:22.825172    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:41:22.837951    4935 logs.go:276] 1 containers: [e23b3ec4e2dc]
	I0728 18:41:22.838014    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:41:22.848517    4935 logs.go:276] 2 containers: [6c659fe93621 28fa0bcdbb2a]
	I0728 18:41:22.848595    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:41:22.858647    4935 logs.go:276] 1 containers: [74d77ffac754]
	I0728 18:41:22.858715    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:41:22.869540    4935 logs.go:276] 2 containers: [13581d913484 b959039eb684]
	I0728 18:41:22.869607    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:41:22.879736    4935 logs.go:276] 0 containers: []
	W0728 18:41:22.879746    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:41:22.879798    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:41:22.889919    4935 logs.go:276] 0 containers: []
	W0728 18:41:22.889931    4935 logs.go:278] No container was found matching "storage-provisioner"
	I0728 18:41:22.889939    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:41:22.889947    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:41:22.902164    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:41:22.902174    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:41:22.906785    4935 logs.go:123] Gathering logs for kube-proxy [74d77ffac754] ...
	I0728 18:41:22.906791    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d77ffac754"
	I0728 18:41:22.918471    4935 logs.go:123] Gathering logs for kube-scheduler [28fa0bcdbb2a] ...
	I0728 18:41:22.918482    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28fa0bcdbb2a"
	I0728 18:41:22.933409    4935 logs.go:123] Gathering logs for kube-controller-manager [b959039eb684] ...
	I0728 18:41:22.933419    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b959039eb684"
	I0728 18:41:22.947348    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:41:22.947359    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:41:22.986114    4935 logs.go:123] Gathering logs for etcd [248ada8e5eb9] ...
	I0728 18:41:22.986121    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 248ada8e5eb9"
	I0728 18:41:23.000110    4935 logs.go:123] Gathering logs for coredns [e23b3ec4e2dc] ...
	I0728 18:41:23.000121    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23b3ec4e2dc"
	I0728 18:41:23.011120    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:41:23.011132    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:41:23.035813    4935 logs.go:123] Gathering logs for kube-apiserver [57bf79d9f4a0] ...
	I0728 18:41:23.035824    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57bf79d9f4a0"
	I0728 18:41:23.049481    4935 logs.go:123] Gathering logs for kube-apiserver [912ef6eb9272] ...
	I0728 18:41:23.049492    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 912ef6eb9272"
	I0728 18:41:23.074331    4935 logs.go:123] Gathering logs for kube-scheduler [6c659fe93621] ...
	I0728 18:41:23.074342    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c659fe93621"
	I0728 18:41:23.085998    4935 logs.go:123] Gathering logs for kube-controller-manager [13581d913484] ...
	I0728 18:41:23.086010    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581d913484"
	I0728 18:41:23.103396    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:41:23.103407    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:41:23.137782    4935 logs.go:123] Gathering logs for etcd [0043dffc83dd] ...
	I0728 18:41:23.137794    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0043dffc83dd"
	I0728 18:41:25.652649    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:30.653197    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:30.653267    4935 kubeadm.go:597] duration metric: took 4m3.05908775s to restartPrimaryControlPlane
	W0728 18:41:30.653321    4935 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0728 18:41:30.653342    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
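
The reset invocation above also shows how minikube pins the tooling version: PATH is prefixed with /var/lib/minikube/binaries/v1.24.1 so the bundled kubeadm is found ahead of any system copy. Run locally, the same construction looks roughly like this (a sketch; the real command runs over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.24.1" // version taken from the log
	cmd := exec.Command("sudo", "env", "PATH="+binDir+":"+os.Getenv("PATH"),
		"kubeadm", "reset",
		"--cri-socket", "/var/run/cri-dockerd.sock", "--force")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm reset failed:", err)
	}
}
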
	I0728 18:41:31.617561    4935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 18:41:31.622782    4935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 18:41:31.626019    4935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 18:41:31.628938    4935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 18:41:31.628945    4935 kubeadm.go:157] found existing configuration files:
	
	I0728 18:41:31.628969    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0728 18:41:31.631384    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0728 18:41:31.631405    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0728 18:41:31.634125    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0728 18:41:31.637308    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0728 18:41:31.637332    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0728 18:41:31.639942    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0728 18:41:31.642534    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0728 18:41:31.642556    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 18:41:31.645905    4935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0728 18:41:31.648857    4935 kubeadm.go:163] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0728 18:41:31.648881    4935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
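
The grep/rm sequence above is a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:50479 here); because none of the files survived the reset, every grep exits with status 2 and the file is removed regardless. The logic amounts to the following (a sketch under those assumptions, not minikube's source):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50479" // from the log
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and delete it,
			// mirroring the unconditional `sudo rm -f` lines above.
			fmt.Println("removing stale", f)
			os.Remove(f)
		}
	}
}
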
	I0728 18:41:31.651447    4935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0728 18:41:31.667056    4935 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0728 18:41:31.667086    4935 kubeadm.go:310] [preflight] Running pre-flight checks
	I0728 18:41:31.723109    4935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0728 18:41:31.723202    4935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0728 18:41:31.723268    4935 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0728 18:41:31.772672    4935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 18:41:31.780855    4935 out.go:204]   - Generating certificates and keys ...
	I0728 18:41:31.780886    4935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0728 18:41:31.780916    4935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0728 18:41:31.780949    4935 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 18:41:31.780974    4935 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0728 18:41:31.781002    4935 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0728 18:41:31.781025    4935 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0728 18:41:31.781051    4935 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0728 18:41:31.781076    4935 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0728 18:41:31.781106    4935 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 18:41:31.781136    4935 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 18:41:31.781151    4935 kubeadm.go:310] [certs] Using the existing "sa" key
	I0728 18:41:31.781173    4935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 18:41:31.812066    4935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0728 18:41:31.997348    4935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0728 18:41:32.052177    4935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 18:41:32.133598    4935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 18:41:32.162721    4935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 18:41:32.162769    4935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 18:41:32.162790    4935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0728 18:41:32.230057    4935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 18:41:32.234291    4935 out.go:204]   - Booting up control plane ...
	I0728 18:41:32.234340    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 18:41:32.234380    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 18:41:32.234436    4935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 18:41:32.234479    4935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 18:41:32.234648    4935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0728 18:41:36.233639    4935 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001079 seconds
	I0728 18:41:36.233716    4935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0728 18:41:36.239490    4935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0728 18:41:36.754069    4935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0728 18:41:36.754264    4935 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-278000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0728 18:41:37.257871    4935 kubeadm.go:310] [bootstrap-token] Using token: yanhle.k7yavktbovzn0uxp
	I0728 18:41:37.261038    4935 out.go:204]   - Configuring RBAC rules ...
	I0728 18:41:37.261105    4935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0728 18:41:37.261158    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0728 18:41:37.267999    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0728 18:41:37.268958    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0728 18:41:37.269724    4935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0728 18:41:37.270675    4935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0728 18:41:37.273756    4935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0728 18:41:37.429875    4935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0728 18:41:37.661849    4935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0728 18:41:37.662341    4935 kubeadm.go:310] 
	I0728 18:41:37.662370    4935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0728 18:41:37.662376    4935 kubeadm.go:310] 
	I0728 18:41:37.662420    4935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0728 18:41:37.662428    4935 kubeadm.go:310] 
	I0728 18:41:37.662443    4935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0728 18:41:37.662472    4935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0728 18:41:37.662501    4935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0728 18:41:37.662504    4935 kubeadm.go:310] 
	I0728 18:41:37.662541    4935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0728 18:41:37.662545    4935 kubeadm.go:310] 
	I0728 18:41:37.662586    4935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0728 18:41:37.662592    4935 kubeadm.go:310] 
	I0728 18:41:37.662641    4935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0728 18:41:37.662679    4935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0728 18:41:37.662719    4935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0728 18:41:37.662725    4935 kubeadm.go:310] 
	I0728 18:41:37.662774    4935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0728 18:41:37.662822    4935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0728 18:41:37.662826    4935 kubeadm.go:310] 
	I0728 18:41:37.662877    4935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yanhle.k7yavktbovzn0uxp \
	I0728 18:41:37.662939    4935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b \
	I0728 18:41:37.662951    4935 kubeadm.go:310] 	--control-plane 
	I0728 18:41:37.662957    4935 kubeadm.go:310] 
	I0728 18:41:37.662995    4935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0728 18:41:37.662999    4935 kubeadm.go:310] 
	I0728 18:41:37.663036    4935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yanhle.k7yavktbovzn0uxp \
	I0728 18:41:37.663103    4935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c4c1501be84d6e769376a12e79a88eb62c7fa74cf7059e57b30ba292796da81b 
	I0728 18:41:37.663174    4935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
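A note on the `--discovery-token-ca-cert-hash` value printed in the join commands above: kubeadm derives it as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. The following is a minimal standalone sketch (not minikube's code) of that derivation; the ca.crt path is the conventional kubeadm location and is an assumption here.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Conventional kubeadm CA location; adjust for your cluster.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm publishes.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```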
	I0728 18:41:37.663224    4935 cni.go:84] Creating CNI manager for ""
	I0728 18:41:37.663232    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:41:37.665875    4935 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0728 18:41:37.672871    4935 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0728 18:41:37.675688    4935 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
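The scp line above ships a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but the file's contents are not reproduced in this log. The sketch below builds a conventional bridge-plugin conflist of the same general shape; every field value is illustrative, not minikube's actual payload.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed minimal bridge CNI config; real minikube values may differ.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	// In minikube this content is copied to /etc/cni/net.d/1-k8s.conflist.
	fmt.Println(string(out))
}
```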
	I0728 18:41:37.680260    4935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 18:41:37.680298    4935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 18:41:37.680323    4935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-278000 minikube.k8s.io/updated_at=2024_07_28T18_41_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=stopped-upgrade-278000 minikube.k8s.io/primary=true
	I0728 18:41:37.721350    4935 ops.go:34] apiserver oom_adj: -16
	I0728 18:41:37.721342    4935 kubeadm.go:1113] duration metric: took 41.074541ms to wait for elevateKubeSystemPrivileges
	I0728 18:41:37.721449    4935 kubeadm.go:394] duration metric: took 4m10.140951708s to StartCluster
	I0728 18:41:37.721461    4935 settings.go:142] acquiring lock: {Name:mk87b264018a6cee2b66b065d01a79c5a5adf3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:41:37.721557    4935 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:41:37.721961    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/kubeconfig: {Name:mk193de249a2c701b098e889c731f2b64761e39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:41:37.722445    4935 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:41:37.722546    4935 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:41:37.722530    4935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0728 18:41:37.722564    4935 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-278000"
	I0728 18:41:37.722577    4935 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-278000"
	W0728 18:41:37.722580    4935 addons.go:243] addon storage-provisioner should already be in state true
	I0728 18:41:37.722591    4935 host.go:66] Checking if "stopped-upgrade-278000" exists ...
	I0728 18:41:37.722598    4935 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-278000"
	I0728 18:41:37.722611    4935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-278000"
	I0728 18:41:37.722862    4935 retry.go:31] will retry after 950.628276ms: connect: dial unix /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/monitor: connect: connection refused
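The retry.go line above reports a single jittered delay before reconnecting to the machine monitor socket. A rough sketch of that retry-with-jitter pattern follows; the exponential growth factor and attempt cap are assumptions for illustration, since the log only shows one delay value.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts are exhausted, sleeping a
// jittered, growing delay between tries so retries don't synchronize.
func retry(op func() error, initial time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// +/-25% jitter around the nominal delay.
		jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // assumed exponential growth; real code typically caps this
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retry(func() error { return errors.New("connection refused") }, time.Second, 3)
}
```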
	I0728 18:41:37.723670    4935 kapi.go:59] client config for stopped-upgrade-278000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/stopped-upgrade-278000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1229/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023945c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 18:41:37.723799    4935 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-278000"
	W0728 18:41:37.723803    4935 addons.go:243] addon default-storageclass should already be in state true
	I0728 18:41:37.723811    4935 host.go:66] Checking if "stopped-upgrade-278000" exists ...
	I0728 18:41:37.724347    4935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 18:41:37.724351    4935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 18:41:37.724357    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:41:37.726801    4935 out.go:177] * Verifying Kubernetes components...
	I0728 18:41:37.734826    4935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 18:41:37.813930    4935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0728 18:41:37.819104    4935 api_server.go:52] waiting for apiserver process to appear ...
	I0728 18:41:37.819150    4935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 18:41:37.820877    4935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 18:41:37.824760    4935 api_server.go:72] duration metric: took 102.300584ms to wait for apiserver process to appear ...
	I0728 18:41:37.824773    4935 api_server.go:88] waiting for apiserver healthz status ...
	I0728 18:41:37.824783    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
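From here on the log alternates "Checking apiserver healthz" with "stopped: ... context deadline exceeded" roughly every five seconds. A simplified sketch of that polling loop is below; the 5s per-probe timeout matches the cadence visible in the log, but the loop structure and overall deadline are assumptions about minikube's internals.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe gives up after ~5s, matching the log cadence
		Transport: &http.Transport{
			// Sketch only: minikube pins the cluster CA rather than skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			continue // the real loop gathers diagnostics between failed probes
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthz OK")
			return
		}
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```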
	I0728 18:41:38.680297    4935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 18:41:38.684349    4935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:41:38.684358    4935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 18:41:38.684370    4935 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/stopped-upgrade-278000/id_rsa Username:docker}
	I0728 18:41:38.715457    4935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 18:41:42.826833    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:42.826886    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:47.827265    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:47.827289    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:52.827971    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:52.827998    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:41:57.828485    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:41:57.828528    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:02.829208    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:02.829248    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:07.830154    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:07.830178    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0728 18:42:08.155442    4935 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0728 18:42:08.162672    4935 out.go:177] * Enabled addons: storage-provisioner
	I0728 18:42:08.168595    4935 addons.go:510] duration metric: took 30.446427542s for enable addons: enabled=[storage-provisioner]
	I0728 18:42:12.831229    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:12.831270    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:17.832654    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:17.832679    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:22.834423    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:22.834445    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:27.836568    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:27.836595    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:32.838764    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:32.838809    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:37.841019    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:37.841153    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:42:37.853364    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:42:37.853437    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:42:37.863874    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:42:37.863937    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:42:37.873929    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:42:37.873994    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:42:37.884761    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:42:37.884824    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:42:37.896455    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:42:37.896524    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:42:37.907253    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:42:37.907315    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:42:37.917325    4935 logs.go:276] 0 containers: []
	W0728 18:42:37.917341    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:42:37.917392    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:42:37.928392    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:42:37.928410    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:42:37.928415    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:42:37.964555    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:42:37.964569    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:42:37.978621    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:42:37.978631    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:42:37.992536    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:42:37.992545    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:42:38.010763    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:42:38.010776    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:42:38.022626    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:42:38.022641    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:42:38.058216    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:42:38.058225    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:42:38.063226    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:42:38.063235    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:42:38.078013    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:42:38.078027    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:42:38.089651    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:42:38.089661    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:42:38.103876    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:42:38.103887    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:42:38.115817    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:42:38.115831    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:42:38.127293    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:42:38.127303    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
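The block above is one full diagnostics cycle, and it repeats between every failed healthz probe for the rest of this test: enumerate each control-plane component's container by name filter, then tail the last 400 lines of its logs. A condensed sketch of that pattern follows, driving the docker CLI through os/exec the way the ssh_runner lines show minikube doing over SSH; the component list is abbreviated for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers (running or exited) whose name
// matches the given filter, mirroring the "docker ps -a --filter" calls.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name="+name, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(comp)
		if err != nil {
			fmt.Println(comp, "error:", err)
			continue
		}
		for _, id := range ids {
			// Same tail depth as the log's "docker logs --tail 400 <id>" calls.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", comp, id, logs)
		}
	}
}
```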
	I0728 18:42:40.654423    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:45.656814    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:45.656954    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:42:45.670205    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:42:45.670275    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:42:45.681425    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:42:45.681491    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:42:45.691486    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:42:45.691557    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:42:45.702001    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:42:45.702066    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:42:45.712622    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:42:45.712687    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:42:45.722866    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:42:45.722929    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:42:45.733108    4935 logs.go:276] 0 containers: []
	W0728 18:42:45.733119    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:42:45.733166    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:42:45.743133    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:42:45.743147    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:42:45.743152    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:42:45.754904    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:42:45.754918    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:42:45.766900    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:42:45.766912    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:42:45.778053    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:42:45.778066    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:42:45.818420    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:42:45.818435    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:42:45.823082    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:42:45.823092    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:42:45.837736    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:42:45.837750    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:42:45.852389    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:42:45.852401    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:42:45.864227    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:42:45.864240    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:42:45.879503    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:42:45.879514    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:42:45.896932    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:42:45.896945    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:42:45.908316    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:42:45.908329    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:42:45.941159    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:42:45.941166    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:42:48.465970    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:42:53.468788    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:42:53.469222    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:42:53.512298    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:42:53.512409    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:42:53.533837    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:42:53.533929    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:42:53.547454    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:42:53.547539    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:42:53.562953    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:42:53.563020    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:42:53.572896    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:42:53.572964    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:42:53.584683    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:42:53.584746    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:42:53.599488    4935 logs.go:276] 0 containers: []
	W0728 18:42:53.599499    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:42:53.599559    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:42:53.610155    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:42:53.610170    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:42:53.610176    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:42:53.621758    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:42:53.621784    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:42:53.657076    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:42:53.657086    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:42:53.661697    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:42:53.661706    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:42:53.675916    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:42:53.675927    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:42:53.689264    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:42:53.689273    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:42:53.700766    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:42:53.700780    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:42:53.712851    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:42:53.712863    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:42:53.730221    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:42:53.730230    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:42:53.756249    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:42:53.756256    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:42:53.789722    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:42:53.789733    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:42:53.802816    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:42:53.802828    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:42:53.820766    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:42:53.820776    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:42:56.335104    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:01.337854    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:01.338283    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:01.375734    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:01.375860    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:01.396824    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:01.396932    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:01.412350    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:01.412423    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:01.425589    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:01.425659    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:01.444504    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:01.444569    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:01.455591    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:01.455647    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:01.469452    4935 logs.go:276] 0 containers: []
	W0728 18:43:01.469464    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:01.469522    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:01.479765    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:01.479780    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:01.479786    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:01.517857    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:01.517867    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:01.532835    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:01.532847    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:01.548863    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:01.548878    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:01.577612    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:01.577632    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:01.590734    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:01.590748    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:01.627780    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:01.627805    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:01.633071    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:01.633082    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:01.649954    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:01.649968    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:01.663420    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:01.663434    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:01.682214    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:01.682230    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:01.694817    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:01.694829    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:01.709247    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:01.709260    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:04.223366    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:09.225533    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:09.225833    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:09.259796    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:09.259892    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:09.280090    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:09.280187    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:09.297234    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:09.297293    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:09.309627    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:09.309695    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:09.326462    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:09.326510    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:09.337513    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:09.337576    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:09.348086    4935 logs.go:276] 0 containers: []
	W0728 18:43:09.348095    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:09.348148    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:09.358977    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:09.358990    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:09.358996    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:09.376691    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:09.376700    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:09.388666    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:09.388675    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:09.423764    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:09.423772    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:09.428041    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:09.428050    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:09.442216    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:09.442228    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:09.457702    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:09.457718    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:09.469037    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:09.469046    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:09.493591    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:09.493602    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:09.505105    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:09.505116    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:09.560228    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:09.560237    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:09.572461    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:09.572473    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:09.583645    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:09.583656    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:12.103667    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:17.106490    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:17.106925    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:17.141580    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:17.141704    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:17.161617    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:17.161726    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:17.176502    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:17.176573    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:17.188878    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:17.188944    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:17.199774    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:17.199834    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:17.210253    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:17.210320    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:17.220766    4935 logs.go:276] 0 containers: []
	W0728 18:43:17.220777    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:17.220836    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:17.231185    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:17.231206    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:17.231212    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:17.291149    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:17.291161    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:17.303258    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:17.303271    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:17.315019    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:17.315032    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:17.326800    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:17.326813    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:17.338367    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:17.338379    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:17.363062    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:17.363070    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:17.374057    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:17.374071    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:17.409679    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:17.409686    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:17.414024    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:17.414033    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:17.428847    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:17.428861    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:17.442731    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:17.442741    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:17.460384    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:17.460396    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:19.979698    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:24.982094    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:24.982609    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:25.024581    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:25.024712    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:25.046867    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:25.046973    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:25.062539    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:25.062628    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:25.075012    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:25.075082    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:25.086802    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:25.086874    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:25.098855    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:25.098917    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:25.109173    4935 logs.go:276] 0 containers: []
	W0728 18:43:25.109182    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:25.109230    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:25.119833    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:25.119850    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:25.119855    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:25.137777    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:25.137789    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:25.142357    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:25.142366    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:25.176699    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:25.176712    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:25.191990    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:25.192002    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:25.206015    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:25.206027    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:25.224313    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:25.224326    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:25.239005    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:25.239018    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:25.251404    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:25.251414    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:25.274788    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:25.274796    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:25.307608    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:25.307616    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:25.319842    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:25.319853    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:25.331104    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:25.331114    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:27.844443    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:32.845852    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:32.846068    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:32.870618    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:32.870730    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:32.888391    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:32.888478    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:32.902345    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:32.902418    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:32.914184    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:32.914263    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:32.925389    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:32.925457    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:32.943247    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:32.943322    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:32.955953    4935 logs.go:276] 0 containers: []
	W0728 18:43:32.955964    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:32.956021    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:32.966276    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:32.966292    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:32.966300    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:32.979935    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:32.979948    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:32.991579    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:32.991593    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:33.003214    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:33.003226    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:33.037711    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:33.037721    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:33.042319    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:33.042327    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:33.056615    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:33.056625    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:33.070385    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:33.070398    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:33.083341    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:33.083353    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:33.106326    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:33.106335    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:33.139904    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:33.139918    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:33.155518    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:33.155530    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:33.172743    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:33.172753    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:35.687159    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:40.689595    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:40.689973    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:40.730068    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:40.730192    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:40.751644    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:40.751730    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:40.772407    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:40.772483    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:40.783909    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:40.783981    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:40.794287    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:40.794360    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:40.812143    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:40.812219    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:40.822880    4935 logs.go:276] 0 containers: []
	W0728 18:43:40.822893    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:40.822946    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:40.839415    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:40.839431    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:40.839436    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:40.858316    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:40.858326    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:40.870549    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:40.870561    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:40.882967    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:40.882979    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:40.887290    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:40.887299    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:40.957344    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:40.957354    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:40.972677    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:40.972687    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:40.986853    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:40.986865    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:40.999347    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:40.999360    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:41.011252    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:41.011264    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:41.029696    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:41.029708    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:41.064677    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:41.064686    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:41.075966    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:41.075979    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:43.600210    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:48.602988    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:48.603383    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:48.641655    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:48.641790    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:48.662817    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:48.662904    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:48.677304    4935 logs.go:276] 2 containers: [40f141ecd834 b755c418988f]
	I0728 18:43:48.677373    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:48.689857    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:48.689916    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:48.701672    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:48.701728    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:48.712680    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:48.712746    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:48.723508    4935 logs.go:276] 0 containers: []
	W0728 18:43:48.723518    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:48.723563    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:48.734412    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:48.734426    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:48.734431    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:48.748776    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:48.748790    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:48.764056    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:48.764070    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:48.775472    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:48.775485    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:48.787286    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:48.787298    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:48.807073    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:48.807084    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:48.818970    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:48.818983    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:48.836881    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:48.836891    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:48.848555    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:48.848568    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:48.882221    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:48.882228    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:48.886164    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:48.886173    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:48.922692    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:48.922703    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:48.947142    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:48.947148    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:51.460099    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:43:56.462831    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:43:56.463233    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:43:56.498060    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:43:56.498192    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:43:56.519890    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:43:56.519996    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:43:56.535058    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:43:56.535126    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:43:56.547403    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:43:56.547469    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:43:56.558770    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:43:56.558834    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:43:56.579316    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:43:56.579381    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:43:56.589474    4935 logs.go:276] 0 containers: []
	W0728 18:43:56.589487    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:43:56.589534    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:43:56.599921    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:43:56.599938    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:43:56.599943    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:43:56.617476    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:43:56.617487    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:43:56.630691    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:43:56.630701    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:43:56.642871    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:43:56.642881    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:43:56.660840    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:43:56.660852    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:43:56.673066    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:43:56.673078    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:43:56.685265    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:43:56.685278    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:43:56.710495    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:43:56.710505    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:43:56.745784    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:43:56.745792    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:43:56.750155    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:43:56.750164    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:43:56.785643    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:43:56.785653    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:43:56.797491    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:43:56.797501    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:43:56.815392    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:43:56.815401    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:43:56.829246    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:43:56.829256    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:43:56.843541    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:43:56.843552    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:43:59.357345    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:04.357964    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:04.358053    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:04.370034    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:04.370112    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:04.381737    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:04.381812    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:04.393843    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:04.393922    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:04.409547    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:04.409614    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:04.421443    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:04.421500    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:04.433652    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:04.433701    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:04.444430    4935 logs.go:276] 0 containers: []
	W0728 18:44:04.444439    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:04.444486    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:04.455769    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:04.455785    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:04.455790    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:04.492817    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:04.492828    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:04.531905    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:04.531918    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:04.544219    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:04.544229    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:04.563135    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:04.563144    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:04.567550    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:04.567557    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:04.582800    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:04.582811    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:04.597946    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:04.597961    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:04.609719    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:04.609729    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:04.622529    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:04.622540    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:04.646751    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:04.646783    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:04.659662    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:04.659678    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:04.673246    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:04.673258    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:04.692284    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:04.692294    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:04.708078    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:04.708086    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:07.238420    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:12.241196    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:12.241689    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:12.280304    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:12.280417    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:12.302580    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:12.302659    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:12.314890    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:12.314959    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:12.325484    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:12.325550    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:12.335766    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:12.335837    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:12.346349    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:12.346419    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:12.356711    4935 logs.go:276] 0 containers: []
	W0728 18:44:12.356725    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:12.356781    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:12.367632    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:12.367651    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:12.367656    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:12.372477    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:12.372483    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:12.407751    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:12.407765    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:12.421805    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:12.421818    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:12.434009    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:12.434022    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:12.445385    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:12.445399    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:12.479420    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:12.479430    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:12.491093    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:12.491107    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:12.505322    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:12.505334    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:12.519333    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:12.519346    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:12.531236    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:12.531247    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:12.545383    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:12.545395    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:12.557121    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:12.557134    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:12.575470    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:12.575481    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:12.600462    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:12.600470    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:15.117325    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:20.120082    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:20.120518    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:20.169201    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:20.169329    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:20.188472    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:20.188549    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:20.202519    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:20.202598    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:20.214858    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:20.214923    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:20.225551    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:20.225622    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:20.236571    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:20.236631    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:20.246816    4935 logs.go:276] 0 containers: []
	W0728 18:44:20.246828    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:20.246875    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:20.257509    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:20.257526    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:20.257531    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:20.261983    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:20.261993    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:20.277500    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:20.277511    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:20.289646    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:20.289660    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:20.311724    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:20.311735    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:20.323942    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:20.323952    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:20.338360    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:20.338373    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:20.363895    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:20.363915    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:20.376719    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:20.376732    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:20.420008    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:20.420019    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:20.434747    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:20.434758    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:20.446903    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:20.446916    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:20.460949    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:20.460962    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:20.495945    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:20.495952    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:20.507518    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:20.507531    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:23.020770    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:28.023443    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:28.023521    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:28.035287    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:28.035373    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:28.046887    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:28.046953    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:28.059655    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:28.059738    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:28.072622    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:28.072690    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:28.084785    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:28.084839    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:28.095706    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:28.095782    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:28.113943    4935 logs.go:276] 0 containers: []
	W0728 18:44:28.113958    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:28.114017    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:28.125081    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:28.125101    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:28.125107    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:28.129750    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:28.129759    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:28.142643    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:28.142654    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:28.155644    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:28.155656    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:28.169042    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:28.169053    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:28.181828    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:28.181841    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:28.194340    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:28.194356    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:28.206622    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:28.206634    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:28.232172    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:28.232191    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:28.269681    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:28.269696    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:28.309808    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:28.309819    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:28.325010    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:28.325020    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:28.340779    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:28.340790    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:28.356008    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:28.356019    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:28.378816    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:28.378837    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:30.902706    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:35.905130    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:35.905537    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:35.940594    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:35.940721    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:35.965195    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:35.965303    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:35.980061    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:35.980143    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:35.994374    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:35.994440    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:36.005395    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:36.005460    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:36.016436    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:36.016501    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:36.026893    4935 logs.go:276] 0 containers: []
	W0728 18:44:36.026903    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:36.026954    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:36.037425    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:36.037443    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:36.037448    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:36.049371    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:36.049383    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:36.063373    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:36.063386    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:36.086838    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:36.086845    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:36.100675    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:36.100685    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:36.118489    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:36.118501    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:36.131107    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:36.131118    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:36.143885    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:36.143896    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:36.156936    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:36.156948    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:36.192540    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:36.192548    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:36.197158    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:36.197166    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:36.231586    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:36.231598    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:36.251242    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:36.251251    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:36.263612    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:36.263625    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:36.286439    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:36.286450    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:38.805754    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:43.808207    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:43.808562    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:43.839328    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:43.839447    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:43.857144    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:43.857240    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:43.870971    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:43.871041    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:43.883351    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:43.883408    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:43.896200    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:43.896269    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:43.906922    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:43.906981    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:43.917751    4935 logs.go:276] 0 containers: []
	W0728 18:44:43.917762    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:43.917814    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:43.928772    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:43.928792    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:43.928821    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:43.940699    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:43.940714    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:43.952796    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:43.952812    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:43.972763    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:43.972773    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:43.977242    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:43.977249    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:44.011428    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:44.011443    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:44.027353    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:44.027367    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:44.039279    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:44.039293    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:44.051078    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:44.051094    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:44.065279    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:44.065290    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:44.077188    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:44.077202    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:44.088549    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:44.088561    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:44.122609    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:44.122618    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:44.136131    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:44.136143    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:44.147683    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:44.147694    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:46.675728    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:51.678520    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:51.678873    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:51.708993    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:51.709124    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:51.726894    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:51.726969    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:51.743944    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:51.744015    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:51.756811    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:51.756889    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:51.767672    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:51.767735    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:51.782416    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:51.782484    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:51.792146    4935 logs.go:276] 0 containers: []
	W0728 18:44:51.792158    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:51.792212    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:51.802884    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:51.802903    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:51.802908    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:51.816706    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:51.816716    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:51.850266    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:51.850274    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:51.854358    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:51.854366    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:51.868226    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:51.868236    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:51.880236    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:51.880247    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:51.894279    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:51.894289    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:51.905735    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:51.905747    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:44:51.923192    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:51.923201    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:51.935086    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:51.935094    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:51.946418    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:51.946430    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:51.984505    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:51.984516    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:51.999247    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:51.999261    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:52.012144    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:52.012161    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:52.025103    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:52.025115    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:54.551253    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:44:59.553477    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:44:59.553879    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:44:59.588405    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:44:59.588535    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:44:59.609070    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:44:59.609175    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:44:59.626956    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:44:59.627038    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:44:59.638776    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:44:59.638836    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:44:59.649354    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:44:59.649418    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:44:59.664306    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:44:59.664378    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:44:59.675101    4935 logs.go:276] 0 containers: []
	W0728 18:44:59.675112    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:44:59.675166    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:44:59.685529    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:44:59.685545    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:44:59.685550    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:44:59.720985    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:44:59.721001    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:44:59.737259    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:44:59.737271    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:44:59.749271    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:44:59.749284    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:44:59.761078    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:44:59.761089    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:44:59.765122    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:44:59.765129    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:44:59.784381    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:44:59.784394    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:44:59.798817    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:44:59.798831    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:44:59.813779    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:44:59.813790    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:44:59.825084    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:44:59.825094    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:44:59.848977    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:44:59.848986    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:44:59.884729    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:44:59.884738    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:44:59.896465    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:44:59.896478    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:44:59.909863    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:44:59.909875    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:44:59.926138    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:44:59.926148    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:45:02.446178    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:45:07.448971    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:45:07.449446    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:45:07.489915    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:45:07.490049    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:45:07.513555    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:45:07.513660    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:45:07.530304    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:45:07.530382    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:45:07.542548    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:45:07.542613    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:45:07.553286    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:45:07.553354    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:45:07.563924    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:45:07.563984    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:45:07.574387    4935 logs.go:276] 0 containers: []
	W0728 18:45:07.574399    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:45:07.574457    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:45:07.584973    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:45:07.584989    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:45:07.584996    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:45:07.589276    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:45:07.589285    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:45:07.604803    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:45:07.604815    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:45:07.629232    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:45:07.629240    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:45:07.645920    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:45:07.645934    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:45:07.658285    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:45:07.658297    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:45:07.672305    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:45:07.672315    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:45:07.696184    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:45:07.696194    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:45:07.708283    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:45:07.708295    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:45:07.720065    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:45:07.720074    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:45:07.741294    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:45:07.741308    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:45:07.752450    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:45:07.752460    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:45:07.764012    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:45:07.764022    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:45:07.799345    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:45:07.799354    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:45:07.841223    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:45:07.841234    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:45:10.353173    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:45:15.355730    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:45:15.355918    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:45:15.368351    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:45:15.368420    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:45:15.379709    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:45:15.379777    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:45:15.390797    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:45:15.390865    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:45:15.401929    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:45:15.401998    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:45:15.412537    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:45:15.412596    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:45:15.423765    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:45:15.423838    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:45:15.434081    4935 logs.go:276] 0 containers: []
	W0728 18:45:15.434092    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:45:15.434146    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:45:15.444344    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:45:15.444361    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:45:15.444366    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:45:15.448760    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:45:15.448769    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:45:15.482902    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:45:15.482914    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:45:15.500098    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:45:15.500108    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:45:15.511861    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:45:15.511871    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:45:15.525561    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:45:15.525573    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:45:15.537085    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:45:15.537100    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:45:15.550054    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:45:15.550064    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:45:15.584850    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:45:15.584857    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:45:15.602114    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:45:15.602127    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:45:15.626399    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:45:15.626408    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:45:15.650733    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:45:15.650746    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:45:15.663424    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:45:15.663436    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:45:15.679904    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:45:15.679917    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:45:15.691397    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:45:15.691410    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:45:18.205172    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:45:23.207394    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:45:23.207760    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:45:23.248004    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:45:23.248135    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:45:23.269640    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:45:23.269761    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:45:23.285054    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:45:23.285136    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:45:23.297224    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:45:23.297292    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:45:23.308624    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:45:23.308690    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:45:23.319115    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:45:23.319179    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:45:23.329499    4935 logs.go:276] 0 containers: []
	W0728 18:45:23.329511    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:45:23.329575    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:45:23.339711    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:45:23.339737    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:45:23.339742    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:45:23.358152    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:45:23.358166    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:45:23.370414    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:45:23.370428    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:45:23.393962    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:45:23.393974    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:45:23.405813    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:45:23.405826    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:45:23.409937    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:45:23.409946    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:45:23.421470    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:45:23.421483    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:45:23.432956    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:45:23.432965    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:45:23.444734    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:45:23.444748    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:45:23.479125    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:45:23.479140    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:45:23.493447    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:45:23.493456    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:45:23.505102    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:45:23.505112    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:45:23.518988    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:45:23.519002    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:45:23.555562    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:45:23.555579    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:45:23.570129    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:45:23.570140    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:45:26.090017    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:45:31.092448    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:45:31.092895    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 18:45:31.135219    4935 logs.go:276] 1 containers: [f1ecfa8e0f0d]
	I0728 18:45:31.135337    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 18:45:31.161794    4935 logs.go:276] 1 containers: [0942fdcec6cc]
	I0728 18:45:31.161895    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 18:45:31.176349    4935 logs.go:276] 4 containers: [01b229b874fd 4e775816e462 40f141ecd834 b755c418988f]
	I0728 18:45:31.176424    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 18:45:31.190295    4935 logs.go:276] 1 containers: [0c4dfc0a7f58]
	I0728 18:45:31.190366    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 18:45:31.200774    4935 logs.go:276] 1 containers: [4e52e38eac4b]
	I0728 18:45:31.200831    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 18:45:31.211316    4935 logs.go:276] 1 containers: [0417fc49a33a]
	I0728 18:45:31.211374    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0728 18:45:31.221787    4935 logs.go:276] 0 containers: []
	W0728 18:45:31.221799    4935 logs.go:278] No container was found matching "kindnet"
	I0728 18:45:31.221857    4935 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 18:45:31.232837    4935 logs.go:276] 1 containers: [31405e31559f]
	I0728 18:45:31.232856    4935 logs.go:123] Gathering logs for storage-provisioner [31405e31559f] ...
	I0728 18:45:31.232861    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31405e31559f"
	I0728 18:45:31.244305    4935 logs.go:123] Gathering logs for etcd [0942fdcec6cc] ...
	I0728 18:45:31.244317    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0942fdcec6cc"
	I0728 18:45:31.258542    4935 logs.go:123] Gathering logs for coredns [b755c418988f] ...
	I0728 18:45:31.258553    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b755c418988f"
	I0728 18:45:31.270247    4935 logs.go:123] Gathering logs for describe nodes ...
	I0728 18:45:31.270261    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0728 18:45:31.304515    4935 logs.go:123] Gathering logs for kube-apiserver [f1ecfa8e0f0d] ...
	I0728 18:45:31.304525    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ecfa8e0f0d"
	I0728 18:45:31.321965    4935 logs.go:123] Gathering logs for coredns [01b229b874fd] ...
	I0728 18:45:31.321973    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01b229b874fd"
	I0728 18:45:31.333342    4935 logs.go:123] Gathering logs for kube-scheduler [0c4dfc0a7f58] ...
	I0728 18:45:31.333351    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c4dfc0a7f58"
	I0728 18:45:31.347735    4935 logs.go:123] Gathering logs for kubelet ...
	I0728 18:45:31.347748    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 18:45:31.383161    4935 logs.go:123] Gathering logs for dmesg ...
	I0728 18:45:31.383170    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 18:45:31.387746    4935 logs.go:123] Gathering logs for kube-controller-manager [0417fc49a33a] ...
	I0728 18:45:31.387755    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0417fc49a33a"
	I0728 18:45:31.404950    4935 logs.go:123] Gathering logs for coredns [40f141ecd834] ...
	I0728 18:45:31.404960    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40f141ecd834"
	I0728 18:45:31.416611    4935 logs.go:123] Gathering logs for kube-proxy [4e52e38eac4b] ...
	I0728 18:45:31.416622    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e52e38eac4b"
	I0728 18:45:31.428051    4935 logs.go:123] Gathering logs for container status ...
	I0728 18:45:31.428063    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 18:45:31.440886    4935 logs.go:123] Gathering logs for coredns [4e775816e462] ...
	I0728 18:45:31.440900    4935 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e775816e462"
	I0728 18:45:31.452345    4935 logs.go:123] Gathering logs for Docker ...
	I0728 18:45:31.452358    4935 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0728 18:45:33.979033    4935 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0728 18:45:38.980843    4935 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0728 18:45:38.989397    4935 out.go:177] 
	W0728 18:45:38.994335    4935 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0728 18:45:38.994359    4935 out.go:239] * 
	W0728 18:45:38.996312    4935 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:39.003244    4935 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-278000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (562.81s)
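The failure above is the harness timing out on the healthz probe it logs at api_server.go:253. The probe can be reproduced by hand for local triage; a minimal sketch, assuming the stopped-upgrade-278000 profile's guest is still running and keeps the address 10.0.2.15 seen in the trace:

	# Query the apiserver health endpoint from inside the guest, i.e. the
	# same URL the harness polls above. A healthy apiserver answers "ok";
	# here the request should hang and time out, matching the "context
	# deadline exceeded" errors in the trace.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-278000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz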

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-146000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-146000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.821619625s)

-- stdout --
	* [pause-146000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-146000" primary control-plane node in "pause-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-146000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-146000 -n pause-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-146000 -n pause-146000: exit status 7 (31.580916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
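This failure, and every remaining qemu2 failure in this run, reports the same "Connection refused" against /var/run/socket_vmnet, which points at the host-side socket_vmnet daemon rather than at minikube itself. A minimal check on the build host, assuming socket_vmnet was installed under /opt/socket_vmnet as the cluster configs later in this report indicate:

	# Is the daemon alive, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If it runs as a launchd job (the label depends on how it was
	# installed), locate it so it can be restarted.
	sudo launchctl list | grep -i vmnet

A socket file that exists but refuses connections usually means the daemon died and left a stale socket behind; restarting the daemon recreates it.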

TestNoKubernetes/serial/StartWithK8s (9.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 : exit status 80 (9.935771667s)

-- stdout --
	* [NoKubernetes-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-664000" primary control-plane node in "NoKubernetes-664000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-664000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-664000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000: exit status 7 (58.037959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.99s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247925333s)

-- stdout --
	* [NoKubernetes-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-664000
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000: exit status 7 (54.593542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
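The stderr above already names the recovery step, and the same NoKubernetes-664000 profile is reused (and fails identically) in the next two subtests, so running the suggested cleanup between attempts would rule out stale VM state:

	# Cleanup suggested by the error text itself: delete the profile so
	# the next start creates the VM from scratch instead of restarting
	# the broken one.
	out/minikube-darwin-arm64 delete -p NoKubernetes-664000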

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248645958s)

-- stdout --
	* [NoKubernetes-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-664000
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000: exit status 7 (65.343625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 : exit status 80 (5.275005708s)

-- stdout --
	* [NoKubernetes-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-664000
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-664000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-664000 -n NoKubernetes-664000: exit status 7 (66.560208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)
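The alsologtostderr trace for the next test (auto-496000, below) shows the invocation that actually fails: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client. The client can be exercised on its own to separate the networking failure from qemu itself; a sketch, assuming the client binary sits at the path logged below and takes the socket path followed by a command to wrap:

	# Run a trivial command through the client. If the daemon's socket is
	# not accepting connections, this fails with the same "Connection
	# refused" before the wrapped command ever runs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true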

TestNetworkPlugins/group/auto/Start (9.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.762165708s)

-- stdout --
	* [auto-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-496000" primary control-plane node in "auto-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:43:53.642440    5131 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:43:53.642574    5131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:43:53.642577    5131 out.go:304] Setting ErrFile to fd 2...
	I0728 18:43:53.642580    5131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:43:53.642703    5131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:43:53.643857    5131 out.go:298] Setting JSON to false
	I0728 18:43:53.660142    5131 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4404,"bootTime":1722213029,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:43:53.660207    5131 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:43:53.666543    5131 out.go:177] * [auto-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:43:53.674586    5131 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:43:53.674674    5131 notify.go:220] Checking for updates...
	I0728 18:43:53.682536    5131 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:43:53.685477    5131 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:43:53.688549    5131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:43:53.691545    5131 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:43:53.694455    5131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:43:53.697790    5131 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:43:53.697861    5131 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:43:53.697905    5131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:43:53.702597    5131 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:43:53.709502    5131 start.go:297] selected driver: qemu2
	I0728 18:43:53.709508    5131 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:43:53.709514    5131 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:43:53.711730    5131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:43:53.714486    5131 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:43:53.717521    5131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:43:53.717535    5131 cni.go:84] Creating CNI manager for ""
	I0728 18:43:53.717542    5131 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:43:53.717546    5131 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:43:53.717571    5131 start.go:340] cluster config:
	{Name:auto-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:43:53.721141    5131 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:43:53.728387    5131 out.go:177] * Starting "auto-496000" primary control-plane node in "auto-496000" cluster
	I0728 18:43:53.732508    5131 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:43:53.732522    5131 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:43:53.732530    5131 cache.go:56] Caching tarball of preloaded images
	I0728 18:43:53.732590    5131 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:43:53.732595    5131 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:43:53.732652    5131 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/auto-496000/config.json ...
	I0728 18:43:53.732663    5131 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/auto-496000/config.json: {Name:mk2aa465d67ff5dea24aac5715f8328cbf7e3ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:43:53.733036    5131 start.go:360] acquireMachinesLock for auto-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:43:53.733070    5131 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "auto-496000"
	I0728 18:43:53.733082    5131 start.go:93] Provisioning new machine with config: &{Name:auto-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:43:53.733106    5131 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:43:53.735035    5131 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:43:53.751061    5131 start.go:159] libmachine.API.Create for "auto-496000" (driver="qemu2")
	I0728 18:43:53.751093    5131 client.go:168] LocalClient.Create starting
	I0728 18:43:53.751151    5131 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:43:53.751185    5131 main.go:141] libmachine: Decoding PEM data...
	I0728 18:43:53.751193    5131 main.go:141] libmachine: Parsing certificate...
	I0728 18:43:53.751236    5131 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:43:53.751260    5131 main.go:141] libmachine: Decoding PEM data...
	I0728 18:43:53.751267    5131 main.go:141] libmachine: Parsing certificate...
	I0728 18:43:53.751670    5131 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:43:53.901408    5131 main.go:141] libmachine: Creating SSH key...
	I0728 18:43:53.940221    5131 main.go:141] libmachine: Creating Disk image...
	I0728 18:43:53.940226    5131 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:43:53.940435    5131 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:43:53.949603    5131 main.go:141] libmachine: STDOUT: 
	I0728 18:43:53.949626    5131 main.go:141] libmachine: STDERR: 
	I0728 18:43:53.949677    5131 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2 +20000M
	I0728 18:43:53.957986    5131 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:43:53.958000    5131 main.go:141] libmachine: STDERR: 
	I0728 18:43:53.958026    5131 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:43:53.958030    5131 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:43:53.958042    5131 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:43:53.958067    5131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:8c:06:f0:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:43:53.959657    5131 main.go:141] libmachine: STDOUT: 
	I0728 18:43:53.959670    5131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:43:53.959691    5131 client.go:171] duration metric: took 208.594625ms to LocalClient.Create
	I0728 18:43:55.961851    5131 start.go:128] duration metric: took 2.228739666s to createHost
	I0728 18:43:55.961917    5131 start.go:83] releasing machines lock for "auto-496000", held for 2.228863083s
	W0728 18:43:55.961990    5131 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:43:55.974947    5131 out.go:177] * Deleting "auto-496000" in qemu2 ...
	W0728 18:43:55.993537    5131 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:43:55.993557    5131 start.go:729] Will try again in 5 seconds ...
	I0728 18:44:00.994879    5131 start.go:360] acquireMachinesLock for auto-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:00.995482    5131 start.go:364] duration metric: took 458.167µs to acquireMachinesLock for "auto-496000"
	I0728 18:44:00.995563    5131 start.go:93] Provisioning new machine with config: &{Name:auto-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:00.995843    5131 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:01.005536    5131 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:01.056591    5131 start.go:159] libmachine.API.Create for "auto-496000" (driver="qemu2")
	I0728 18:44:01.056645    5131 client.go:168] LocalClient.Create starting
	I0728 18:44:01.056764    5131 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:01.056830    5131 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:01.056872    5131 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:01.056935    5131 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:01.056980    5131 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:01.056994    5131 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:01.057544    5131 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:01.218471    5131 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:01.314447    5131 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:01.314454    5131 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:01.314666    5131 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:44:01.324632    5131 main.go:141] libmachine: STDOUT: 
	I0728 18:44:01.324736    5131 main.go:141] libmachine: STDERR: 
	I0728 18:44:01.324787    5131 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2 +20000M
	I0728 18:44:01.332930    5131 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:01.332991    5131 main.go:141] libmachine: STDERR: 
	I0728 18:44:01.333003    5131 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:44:01.333008    5131 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:01.333017    5131 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:01.333041    5131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:32:83:c9:82:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/auto-496000/disk.qcow2
	I0728 18:44:01.334620    5131 main.go:141] libmachine: STDOUT: 
	I0728 18:44:01.334711    5131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:01.334724    5131 client.go:171] duration metric: took 278.076041ms to LocalClient.Create
	I0728 18:44:03.336905    5131 start.go:128] duration metric: took 2.341052333s to createHost
	I0728 18:44:03.336984    5131 start.go:83] releasing machines lock for "auto-496000", held for 2.341502125s
	W0728 18:44:03.337394    5131 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:03.346995    5131 out.go:177] 
	W0728 18:44:03.354011    5131 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:44:03.354033    5131 out.go:239] * 
	* 
	W0728 18:44:03.356042    5131 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:44:03.365036    5131 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
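All of the TestNetworkPlugins failures in this report share one root cause: the qemu2 driver launches each VM through socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal triage sketch for the CI host follows; the binary and socket paths are taken from the config dump above, while the Homebrew service name and the gateway address are assumptions based on the socket_vmnet documentation, not on anything in this log:

	# Check whether anything is serving the socket the driver dials
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Homebrew install: (re)start the daemon (it must run as root)
	sudo brew services start socket_vmnet

	# Manual install: run the daemon directly (gateway address is illustrative)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the socket file exists but connect still fails, a stale socket left behind by a crashed daemon is a plausible culprit; removing it and restarting the service before re-running the suite is the usual remedy.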

TestNetworkPlugins/group/kindnet/Start (9.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.695571667s)

-- stdout --
	* [kindnet-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-496000" primary control-plane node in "kindnet-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:44:05.516865    5241 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:44:05.517015    5241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:05.517019    5241 out.go:304] Setting ErrFile to fd 2...
	I0728 18:44:05.517021    5241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:05.517169    5241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:44:05.518378    5241 out.go:298] Setting JSON to false
	I0728 18:44:05.535423    5241 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4416,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:44:05.535488    5241 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:44:05.542067    5241 out.go:177] * [kindnet-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:44:05.549950    5241 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:44:05.550016    5241 notify.go:220] Checking for updates...
	I0728 18:44:05.557078    5241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:44:05.558655    5241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:44:05.562066    5241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:44:05.565063    5241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:44:05.568057    5241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:44:05.571499    5241 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:44:05.571566    5241 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:44:05.571619    5241 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:44:05.576050    5241 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:44:05.583064    5241 start.go:297] selected driver: qemu2
	I0728 18:44:05.583078    5241 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:44:05.583084    5241 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:44:05.585314    5241 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:44:05.588021    5241 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:44:05.591133    5241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:44:05.591163    5241 cni.go:84] Creating CNI manager for "kindnet"
	I0728 18:44:05.591167    5241 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0728 18:44:05.591197    5241 start.go:340] cluster config:
	{Name:kindnet-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:44:05.594642    5241 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:44:05.602075    5241 out.go:177] * Starting "kindnet-496000" primary control-plane node in "kindnet-496000" cluster
	I0728 18:44:05.605003    5241 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:44:05.605015    5241 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:44:05.605022    5241 cache.go:56] Caching tarball of preloaded images
	I0728 18:44:05.605073    5241 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:44:05.605077    5241 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:44:05.605126    5241 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kindnet-496000/config.json ...
	I0728 18:44:05.605136    5241 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kindnet-496000/config.json: {Name:mka146e9a0440968532131e9acabc76b3867e26d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:44:05.605344    5241 start.go:360] acquireMachinesLock for kindnet-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:05.605373    5241 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "kindnet-496000"
	I0728 18:44:05.605384    5241 start.go:93] Provisioning new machine with config: &{Name:kindnet-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:05.605415    5241 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:05.612913    5241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:05.628056    5241 start.go:159] libmachine.API.Create for "kindnet-496000" (driver="qemu2")
	I0728 18:44:05.628079    5241 client.go:168] LocalClient.Create starting
	I0728 18:44:05.628147    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:05.628176    5241 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:05.628184    5241 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:05.628225    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:05.628247    5241 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:05.628254    5241 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:05.628747    5241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:05.773240    5241 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:05.816339    5241 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:05.816346    5241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:05.816563    5241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:05.825757    5241 main.go:141] libmachine: STDOUT: 
	I0728 18:44:05.825776    5241 main.go:141] libmachine: STDERR: 
	I0728 18:44:05.825815    5241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2 +20000M
	I0728 18:44:05.833668    5241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:05.833684    5241 main.go:141] libmachine: STDERR: 
	I0728 18:44:05.833703    5241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:05.833708    5241 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:05.833722    5241 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:05.833752    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ba:a4:b2:2c:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:05.835315    5241 main.go:141] libmachine: STDOUT: 
	I0728 18:44:05.835328    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:05.835347    5241 client.go:171] duration metric: took 207.266416ms to LocalClient.Create
	I0728 18:44:07.837434    5241 start.go:128] duration metric: took 2.232029166s to createHost
	I0728 18:44:07.837479    5241 start.go:83] releasing machines lock for "kindnet-496000", held for 2.232116666s
	W0728 18:44:07.837527    5241 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:07.845611    5241 out.go:177] * Deleting "kindnet-496000" in qemu2 ...
	W0728 18:44:07.867466    5241 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:07.867484    5241 start.go:729] Will try again in 5 seconds ...
	I0728 18:44:12.869592    5241 start.go:360] acquireMachinesLock for kindnet-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:12.869761    5241 start.go:364] duration metric: took 133.75µs to acquireMachinesLock for "kindnet-496000"
	I0728 18:44:12.869778    5241 start.go:93] Provisioning new machine with config: &{Name:kindnet-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:12.869813    5241 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:12.877695    5241 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:12.895271    5241 start.go:159] libmachine.API.Create for "kindnet-496000" (driver="qemu2")
	I0728 18:44:12.895316    5241 client.go:168] LocalClient.Create starting
	I0728 18:44:12.895380    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:12.895422    5241 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:12.895432    5241 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:12.895466    5241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:12.895491    5241 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:12.895501    5241 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:12.895773    5241 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:13.046621    5241 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:13.120760    5241 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:13.120766    5241 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:13.120965    5241 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:13.130307    5241 main.go:141] libmachine: STDOUT: 
	I0728 18:44:13.130328    5241 main.go:141] libmachine: STDERR: 
	I0728 18:44:13.130385    5241 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2 +20000M
	I0728 18:44:13.138195    5241 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:13.138211    5241 main.go:141] libmachine: STDERR: 
	I0728 18:44:13.138225    5241 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:13.138230    5241 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:13.138244    5241 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:13.138274    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:cd:f9:80:38:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kindnet-496000/disk.qcow2
	I0728 18:44:13.139984    5241 main.go:141] libmachine: STDOUT: 
	I0728 18:44:13.139998    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:13.140011    5241 client.go:171] duration metric: took 244.692833ms to LocalClient.Create
	I0728 18:44:15.142168    5241 start.go:128] duration metric: took 2.272354958s to createHost
	I0728 18:44:15.142223    5241 start.go:83] releasing machines lock for "kindnet-496000", held for 2.27247525s
	W0728 18:44:15.142684    5241 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:15.154826    5241 out.go:177] 
	W0728 18:44:15.158844    5241 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:44:15.158897    5241 out.go:239] * 
	* 
	W0728 18:44:15.161727    5241 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:44:15.169773    5241 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.70s)
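For orientation, the error surfaces before QEMU ever runs: minikube does not start qemu-system-aarch64 directly, it wraps the invocation in socket_vmnet_client, which first connects to the daemon socket and then launches the command that follows with the connected descriptor exposed as fd 3. The "Connection refused" is therefore the client's own connect() failing, which is why QEMU itself reports nothing. A trimmed sketch of the command logged at 18:44:05.833752 above (firmware, ISO, QMP, pidfile, and disk arguments omitted for brevity):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
		-m 3072 -smp 2 \
		-device virtio-net-pci,netdev=net0,mac=12:ba:a4:b2:2c:42 \
		-netdev socket,id=net0,fd=3

The -netdev socket,id=net0,fd=3 argument only makes sense underneath socket_vmnet_client, which supplies that descriptor; run bare, the same QEMU command line would fail for a different reason.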

TestNetworkPlugins/group/calico/Start (10.07s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.067129125s)

-- stdout --
	* [calico-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-496000" primary control-plane node in "calico-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:44:17.403519    5356 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:44:17.403752    5356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:17.403755    5356 out.go:304] Setting ErrFile to fd 2...
	I0728 18:44:17.403757    5356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:17.403882    5356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:44:17.404925    5356 out.go:298] Setting JSON to false
	I0728 18:44:17.421135    5356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4428,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:44:17.421199    5356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:44:17.427586    5356 out.go:177] * [calico-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:44:17.435530    5356 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:44:17.435565    5356 notify.go:220] Checking for updates...
	I0728 18:44:17.442478    5356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:44:17.445471    5356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:44:17.448536    5356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:44:17.451448    5356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:44:17.454471    5356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:44:17.457823    5356 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:44:17.457896    5356 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:44:17.457946    5356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:44:17.462504    5356 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:44:17.469486    5356 start.go:297] selected driver: qemu2
	I0728 18:44:17.469491    5356 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:44:17.469500    5356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:44:17.471566    5356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:44:17.475451    5356 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:44:17.478521    5356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:44:17.478537    5356 cni.go:84] Creating CNI manager for "calico"
	I0728 18:44:17.478541    5356 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0728 18:44:17.478578    5356 start.go:340] cluster config:
	{Name:calico-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:44:17.482113    5356 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:44:17.490495    5356 out.go:177] * Starting "calico-496000" primary control-plane node in "calico-496000" cluster
	I0728 18:44:17.494496    5356 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:44:17.494508    5356 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:44:17.494518    5356 cache.go:56] Caching tarball of preloaded images
	I0728 18:44:17.494578    5356 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:44:17.494583    5356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:44:17.494636    5356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/calico-496000/config.json ...
	I0728 18:44:17.494647    5356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/calico-496000/config.json: {Name:mk542da055bc106bb8ea7693c037a4c326d76a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:44:17.494856    5356 start.go:360] acquireMachinesLock for calico-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:17.494887    5356 start.go:364] duration metric: took 26.209µs to acquireMachinesLock for "calico-496000"
	I0728 18:44:17.494899    5356 start.go:93] Provisioning new machine with config: &{Name:calico-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:17.494926    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:17.503516    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:17.520392    5356 start.go:159] libmachine.API.Create for "calico-496000" (driver="qemu2")
	I0728 18:44:17.520440    5356 client.go:168] LocalClient.Create starting
	I0728 18:44:17.520532    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:17.520564    5356 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:17.520578    5356 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:17.520616    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:17.520639    5356 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:17.520648    5356 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:17.521070    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:17.670571    5356 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:17.871522    5356 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:17.871534    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:17.871784    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:17.881446    5356 main.go:141] libmachine: STDOUT: 
	I0728 18:44:17.881463    5356 main.go:141] libmachine: STDERR: 
	I0728 18:44:17.881512    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2 +20000M
	I0728 18:44:17.889323    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:17.889341    5356 main.go:141] libmachine: STDERR: 
	I0728 18:44:17.889355    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:17.889359    5356 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:17.889373    5356 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:17.889407    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:4e:7d:88:db:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:17.891046    5356 main.go:141] libmachine: STDOUT: 
	I0728 18:44:17.891063    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:17.891082    5356 client.go:171] duration metric: took 370.641333ms to LocalClient.Create
	I0728 18:44:19.893357    5356 start.go:128] duration metric: took 2.3984245s to createHost
	I0728 18:44:19.893434    5356 start.go:83] releasing machines lock for "calico-496000", held for 2.398563041s
	W0728 18:44:19.893500    5356 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:19.900558    5356 out.go:177] * Deleting "calico-496000" in qemu2 ...
	W0728 18:44:19.933522    5356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:19.933551    5356 start.go:729] Will try again in 5 seconds ...
	I0728 18:44:24.935689    5356 start.go:360] acquireMachinesLock for calico-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:24.936239    5356 start.go:364] duration metric: took 465.167µs to acquireMachinesLock for "calico-496000"
	I0728 18:44:24.936437    5356 start.go:93] Provisioning new machine with config: &{Name:calico-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:24.936684    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:24.943331    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:24.993694    5356 start.go:159] libmachine.API.Create for "calico-496000" (driver="qemu2")
	I0728 18:44:24.993754    5356 client.go:168] LocalClient.Create starting
	I0728 18:44:24.993880    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:24.993947    5356 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:24.993964    5356 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:24.994023    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:24.994069    5356 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:24.994084    5356 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:24.994591    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:25.153516    5356 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:25.376920    5356 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:25.376930    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:25.377155    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:25.386436    5356 main.go:141] libmachine: STDOUT: 
	I0728 18:44:25.386466    5356 main.go:141] libmachine: STDERR: 
	I0728 18:44:25.386523    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2 +20000M
	I0728 18:44:25.394465    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:25.394485    5356 main.go:141] libmachine: STDERR: 
	I0728 18:44:25.394498    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:25.394502    5356 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:25.394516    5356 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:25.394555    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:21:3d:79:f1:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/calico-496000/disk.qcow2
	I0728 18:44:25.396228    5356 main.go:141] libmachine: STDOUT: 
	I0728 18:44:25.396245    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:25.396261    5356 client.go:171] duration metric: took 402.504375ms to LocalClient.Create
	I0728 18:44:27.397732    5356 start.go:128] duration metric: took 2.461029292s to createHost
	I0728 18:44:27.397815    5356 start.go:83] releasing machines lock for "calico-496000", held for 2.461579375s
	W0728 18:44:27.398213    5356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:27.412957    5356 out.go:177] 
	W0728 18:44:27.416966    5356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:44:27.416996    5356 out.go:239] * 
	* 
	W0728 18:44:27.419259    5356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:44:27.430921    5356 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.07s)
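A useful signal before the next group: every step up to the network hand-off succeeds on each attempt, including the certificate handling and the two-step disk build ("Image resized." on STDOUT), so qemu-img and the image pipeline are healthy and the failure is isolated to the socket_vmnet connection. The disk build the driver performs reduces to the following sketch (file names shortened to illustrative placeholders):

	# Convert the raw seed image to qcow2, then grow it by the requested ~20 GB
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

That these two commands succeed in every profile while every socket_vmnet_client call fails points at a host-side daemon problem rather than anything specific to a single CNI profile.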

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.84171425s)

-- stdout --
	* [custom-flannel-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-496000" primary control-plane node in "custom-flannel-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:44:29.871410    5475 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:44:29.871541    5475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:29.871544    5475 out.go:304] Setting ErrFile to fd 2...
	I0728 18:44:29.871547    5475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:29.871677    5475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:44:29.872778    5475 out.go:298] Setting JSON to false
	I0728 18:44:29.889119    5475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4440,"bootTime":1722213029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:44:29.889219    5475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:44:29.896194    5475 out.go:177] * [custom-flannel-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:44:29.903169    5475 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:44:29.903248    5475 notify.go:220] Checking for updates...
	I0728 18:44:29.907592    5475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:44:29.911133    5475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:44:29.914148    5475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:44:29.917164    5475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:44:29.920203    5475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:44:29.923458    5475 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:44:29.923527    5475 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:44:29.923571    5475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:44:29.928156    5475 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:44:29.935171    5475 start.go:297] selected driver: qemu2
	I0728 18:44:29.935176    5475 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:44:29.935181    5475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:44:29.937275    5475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:44:29.941176    5475 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:44:29.944269    5475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:44:29.944296    5475 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0728 18:44:29.944303    5475 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0728 18:44:29.944336    5475 start.go:340] cluster config:
	{Name:custom-flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:44:29.947718    5475 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:44:29.956156    5475 out.go:177] * Starting "custom-flannel-496000" primary control-plane node in "custom-flannel-496000" cluster
	I0728 18:44:29.960129    5475 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:44:29.960151    5475 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:44:29.960160    5475 cache.go:56] Caching tarball of preloaded images
	I0728 18:44:29.960222    5475 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:44:29.960227    5475 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:44:29.960277    5475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/custom-flannel-496000/config.json ...
	I0728 18:44:29.960296    5475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/custom-flannel-496000/config.json: {Name:mkdb9f209abf31f0249950656ff5f4b2266929f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:44:29.960512    5475 start.go:360] acquireMachinesLock for custom-flannel-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:29.960548    5475 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "custom-flannel-496000"
	I0728 18:44:29.960569    5475 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:29.960594    5475 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:29.969178    5475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:29.983967    5475 start.go:159] libmachine.API.Create for "custom-flannel-496000" (driver="qemu2")
	I0728 18:44:29.983996    5475 client.go:168] LocalClient.Create starting
	I0728 18:44:29.984060    5475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:29.984090    5475 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:29.984101    5475 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:29.984138    5475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:29.984163    5475 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:29.984170    5475 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:29.984535    5475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:30.132398    5475 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:30.248497    5475 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:30.248507    5475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:30.248723    5475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:30.258143    5475 main.go:141] libmachine: STDOUT: 
	I0728 18:44:30.258174    5475 main.go:141] libmachine: STDERR: 
	I0728 18:44:30.258243    5475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2 +20000M
	I0728 18:44:30.266640    5475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:30.266656    5475 main.go:141] libmachine: STDERR: 
	I0728 18:44:30.266675    5475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:30.266682    5475 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:30.266693    5475 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:30.266716    5475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:5c:25:73:b7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:30.268447    5475 main.go:141] libmachine: STDOUT: 
	I0728 18:44:30.268461    5475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:30.268480    5475 client.go:171] duration metric: took 284.482875ms to LocalClient.Create
	I0728 18:44:32.270661    5475 start.go:128] duration metric: took 2.310064917s to createHost
	I0728 18:44:32.270715    5475 start.go:83] releasing machines lock for "custom-flannel-496000", held for 2.310184s
	W0728 18:44:32.270778    5475 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:32.283478    5475 out.go:177] * Deleting "custom-flannel-496000" in qemu2 ...
	W0728 18:44:32.306396    5475 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:32.306415    5475 start.go:729] Will try again in 5 seconds ...
	I0728 18:44:37.308534    5475 start.go:360] acquireMachinesLock for custom-flannel-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:37.308824    5475 start.go:364] duration metric: took 228.625µs to acquireMachinesLock for "custom-flannel-496000"
	I0728 18:44:37.308858    5475 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:37.309002    5475 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:37.317404    5475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:37.350439    5475 start.go:159] libmachine.API.Create for "custom-flannel-496000" (driver="qemu2")
	I0728 18:44:37.350487    5475 client.go:168] LocalClient.Create starting
	I0728 18:44:37.350601    5475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:37.350658    5475 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:37.350673    5475 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:37.350729    5475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:37.350767    5475 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:37.350776    5475 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:37.351272    5475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:37.503391    5475 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:37.619897    5475 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:37.619908    5475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:37.620136    5475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:37.629545    5475 main.go:141] libmachine: STDOUT: 
	I0728 18:44:37.629570    5475 main.go:141] libmachine: STDERR: 
	I0728 18:44:37.629617    5475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2 +20000M
	I0728 18:44:37.637662    5475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:37.637677    5475 main.go:141] libmachine: STDERR: 
	I0728 18:44:37.637690    5475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:37.637699    5475 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:37.637710    5475 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:37.637735    5475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:03:0c:ef:d4:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/custom-flannel-496000/disk.qcow2
	I0728 18:44:37.639402    5475 main.go:141] libmachine: STDOUT: 
	I0728 18:44:37.639418    5475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:37.639429    5475 client.go:171] duration metric: took 288.939958ms to LocalClient.Create
	I0728 18:44:39.641570    5475 start.go:128] duration metric: took 2.332569917s to createHost
	I0728 18:44:39.641628    5475 start.go:83] releasing machines lock for "custom-flannel-496000", held for 2.3328125s
	W0728 18:44:39.642016    5475 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:39.651412    5475 out.go:177] 
	W0728 18:44:39.656425    5475 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:44:39.656444    5475 out.go:239] * 
	* 
	W0728 18:44:39.658330    5475 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:44:39.668374    5475 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
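
Note that the disk-image preparation logged above succeeds on both attempts: the `qemu-img convert` (raw to qcow2) and `qemu-img resize +20000M` steps each return with empty STDERR, and only the subsequent socket_vmnet attach fails. For reference, the same two steps can be reproduced standalone — the temp directory and the 1M seed size below are placeholders for illustration, not the CI paths:

	#!/usr/bin/env bash
	# Reproduce libmachine's disk preparation outside minikube.
	set -euo pipefail
	WORKDIR=$(mktemp -d)
	# Stand-in for the raw image minikube writes before converting.
	qemu-img create -f raw "$WORKDIR/disk.qcow2.raw" 1M
	# The same convert + resize pair that appears in the logs above.
	qemu-img convert -f raw -O qcow2 "$WORKDIR/disk.qcow2.raw" "$WORKDIR/disk.qcow2"
	qemu-img resize "$WORKDIR/disk.qcow2" +20000M
	qemu-img info "$WORKDIR/disk.qcow2"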

TestNetworkPlugins/group/false/Start (9.96s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.955674333s)

-- stdout --
	* [false-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-496000" primary control-plane node in "false-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:44:42.024922    5596 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:44:42.025067    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:42.025075    5596 out.go:304] Setting ErrFile to fd 2...
	I0728 18:44:42.025078    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:42.025213    5596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:44:42.026179    5596 out.go:298] Setting JSON to false
	I0728 18:44:42.042234    5596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4453,"bootTime":1722213029,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:44:42.042306    5596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:44:42.048257    5596 out.go:177] * [false-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:44:42.055948    5596 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:44:42.055996    5596 notify.go:220] Checking for updates...
	I0728 18:44:42.063043    5596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:44:42.064430    5596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:44:42.068067    5596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:44:42.071068    5596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:44:42.072480    5596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:44:42.076462    5596 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:44:42.076531    5596 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:44:42.076581    5596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:44:42.081046    5596 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:44:42.086068    5596 start.go:297] selected driver: qemu2
	I0728 18:44:42.086076    5596 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:44:42.086083    5596 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:44:42.088553    5596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:44:42.093047    5596 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:44:42.094606    5596 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:44:42.094620    5596 cni.go:84] Creating CNI manager for "false"
	I0728 18:44:42.094645    5596 start.go:340] cluster config:
	{Name:false-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:44:42.098326    5596 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:44:42.106092    5596 out.go:177] * Starting "false-496000" primary control-plane node in "false-496000" cluster
	I0728 18:44:42.110027    5596 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:44:42.110043    5596 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:44:42.110051    5596 cache.go:56] Caching tarball of preloaded images
	I0728 18:44:42.110107    5596 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:44:42.110112    5596 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:44:42.110174    5596 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/false-496000/config.json ...
	I0728 18:44:42.110186    5596 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/false-496000/config.json: {Name:mkbeccaeb2850aefc44e309aa2a5f869738b8fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:44:42.110411    5596 start.go:360] acquireMachinesLock for false-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:42.110444    5596 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "false-496000"
	I0728 18:44:42.110457    5596 start.go:93] Provisioning new machine with config: &{Name:false-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:42.110487    5596 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:42.118057    5596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:42.134389    5596 start.go:159] libmachine.API.Create for "false-496000" (driver="qemu2")
	I0728 18:44:42.134413    5596 client.go:168] LocalClient.Create starting
	I0728 18:44:42.134482    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:42.134513    5596 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:42.134521    5596 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:42.134561    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:42.134584    5596 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:42.134591    5596 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:42.134990    5596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:42.282581    5596 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:42.531022    5596 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:42.531035    5596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:42.531279    5596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:42.540689    5596 main.go:141] libmachine: STDOUT: 
	I0728 18:44:42.540718    5596 main.go:141] libmachine: STDERR: 
	I0728 18:44:42.540784    5596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2 +20000M
	I0728 18:44:42.548862    5596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:42.548876    5596 main.go:141] libmachine: STDERR: 
	I0728 18:44:42.548889    5596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:42.548895    5596 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:42.548907    5596 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:42.548939    5596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:4f:d3:92:55:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:42.550601    5596 main.go:141] libmachine: STDOUT: 
	I0728 18:44:42.550615    5596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:42.550634    5596 client.go:171] duration metric: took 416.2205ms to LocalClient.Create
	I0728 18:44:44.552780    5596 start.go:128] duration metric: took 2.442294834s to createHost
	I0728 18:44:44.552842    5596 start.go:83] releasing machines lock for "false-496000", held for 2.442415916s
	W0728 18:44:44.552885    5596 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:44.562603    5596 out.go:177] * Deleting "false-496000" in qemu2 ...
	W0728 18:44:44.584856    5596 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:44.584896    5596 start.go:729] Will try again in 5 seconds ...
	I0728 18:44:49.586657    5596 start.go:360] acquireMachinesLock for false-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:49.587047    5596 start.go:364] duration metric: took 335.458µs to acquireMachinesLock for "false-496000"
	I0728 18:44:49.587193    5596 start.go:93] Provisioning new machine with config: &{Name:false-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:49.587461    5596 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:49.596085    5596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:49.638197    5596 start.go:159] libmachine.API.Create for "false-496000" (driver="qemu2")
	I0728 18:44:49.638242    5596 client.go:168] LocalClient.Create starting
	I0728 18:44:49.638337    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:49.638390    5596 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:49.638402    5596 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:49.638465    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:49.638504    5596 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:49.638515    5596 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:49.639090    5596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:49.791775    5596 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:49.895611    5596 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:49.895623    5596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:49.895850    5596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:49.905248    5596 main.go:141] libmachine: STDOUT: 
	I0728 18:44:49.905262    5596 main.go:141] libmachine: STDERR: 
	I0728 18:44:49.905312    5596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2 +20000M
	I0728 18:44:49.913255    5596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:49.913269    5596 main.go:141] libmachine: STDERR: 
	I0728 18:44:49.913281    5596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:49.913286    5596 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:49.913298    5596 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:49.913331    5596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:1d:61:e4:89:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/false-496000/disk.qcow2
	I0728 18:44:49.915035    5596 main.go:141] libmachine: STDOUT: 
	I0728 18:44:49.915047    5596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:49.915059    5596 client.go:171] duration metric: took 276.815458ms to LocalClient.Create
	I0728 18:44:51.916979    5596 start.go:128] duration metric: took 2.32952375s to createHost
	I0728 18:44:51.917006    5596 start.go:83] releasing machines lock for "false-496000", held for 2.329969292s
	W0728 18:44:51.917092    5596 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:51.927372    5596 out.go:177] 
	W0728 18:44:51.930369    5596 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:44:51.930376    5596 out.go:239] * 
	* 
	W0728 18:44:51.930832    5596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:44:51.943335    5596 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)
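
As in the groups above, minikube deletes the half-created machine and retries exactly once after 5 seconds ("Will try again in 5 seconds ...") before exiting with GUEST_PROVISION. The shape of that flow, sketched in bash with a hypothetical create_host stand-in for libmachine's createHost (here it only probes the socket, which is where the real create fails):

	#!/usr/bin/env bash
	# failed create -> delete -> 5 s wait -> one retry -> exit status 80,
	# matching the status net_test.go:114 reports for every group.
	create_host() {
	    nc -U /var/run/socket_vmnet </dev/null 2>/dev/null
	}
	if ! create_host; then
	    echo '! StartHost failed, but will try again'
	    sleep 5
	    create_host || { echo 'X Exiting due to GUEST_PROVISION'; exit 80; }
	fi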

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0728 18:44:56.725008    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.832317625s)

-- stdout --
	* [enable-default-cni-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-496000" primary control-plane node in "enable-default-cni-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:44:54.077476    5705 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:44:54.077624    5705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:54.077627    5705 out.go:304] Setting ErrFile to fd 2...
	I0728 18:44:54.077629    5705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:44:54.077774    5705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:44:54.078961    5705 out.go:298] Setting JSON to false
	I0728 18:44:54.095454    5705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4465,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:44:54.095529    5705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:44:54.100906    5705 out.go:177] * [enable-default-cni-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:44:54.108950    5705 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:44:54.109009    5705 notify.go:220] Checking for updates...
	I0728 18:44:54.116920    5705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:44:54.119862    5705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:44:54.123850    5705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:44:54.127887    5705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:44:54.131865    5705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:44:54.136204    5705 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:44:54.136276    5705 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:44:54.136331    5705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:44:54.139992    5705 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:44:54.151895    5705 start.go:297] selected driver: qemu2
	I0728 18:44:54.151902    5705 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:44:54.151910    5705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:44:54.154255    5705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:44:54.157726    5705 out.go:177] * Automatically selected the socket_vmnet network
	E0728 18:44:54.161964    5705 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0728 18:44:54.161976    5705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:44:54.161993    5705 cni.go:84] Creating CNI manager for "bridge"
	I0728 18:44:54.161998    5705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:44:54.162038    5705 start.go:340] cluster config:
	{Name:enable-default-cni-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:44:54.165796    5705 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:44:54.173866    5705 out.go:177] * Starting "enable-default-cni-496000" primary control-plane node in "enable-default-cni-496000" cluster
	I0728 18:44:54.177900    5705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:44:54.177917    5705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:44:54.177931    5705 cache.go:56] Caching tarball of preloaded images
	I0728 18:44:54.178008    5705 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:44:54.178014    5705 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:44:54.178095    5705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/enable-default-cni-496000/config.json ...
	I0728 18:44:54.178107    5705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/enable-default-cni-496000/config.json: {Name:mkb16ff01491ae662ca5a759c683b1de42c02225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:44:54.178492    5705 start.go:360] acquireMachinesLock for enable-default-cni-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:44:54.178529    5705 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "enable-default-cni-496000"
	I0728 18:44:54.178541    5705 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:44:54.178568    5705 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:44:54.181936    5705 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:44:54.197957    5705 start.go:159] libmachine.API.Create for "enable-default-cni-496000" (driver="qemu2")
	I0728 18:44:54.197984    5705 client.go:168] LocalClient.Create starting
	I0728 18:44:54.198047    5705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:44:54.198076    5705 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:54.198086    5705 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:54.198119    5705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:44:54.198144    5705 main.go:141] libmachine: Decoding PEM data...
	I0728 18:44:54.198151    5705 main.go:141] libmachine: Parsing certificate...
	I0728 18:44:54.198541    5705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:44:54.376831    5705 main.go:141] libmachine: Creating SSH key...
	I0728 18:44:54.511356    5705 main.go:141] libmachine: Creating Disk image...
	I0728 18:44:54.511363    5705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:44:54.511555    5705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:44:54.520990    5705 main.go:141] libmachine: STDOUT: 
	I0728 18:44:54.521053    5705 main.go:141] libmachine: STDERR: 
	I0728 18:44:54.521112    5705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2 +20000M
	I0728 18:44:54.528899    5705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:44:54.528915    5705 main.go:141] libmachine: STDERR: 
	I0728 18:44:54.528932    5705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:44:54.528940    5705 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:44:54.528953    5705 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:44:54.528987    5705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:66:27:67:0f:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:44:54.530602    5705 main.go:141] libmachine: STDOUT: 
	I0728 18:44:54.530653    5705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:44:54.530675    5705 client.go:171] duration metric: took 332.686042ms to LocalClient.Create
	I0728 18:44:56.532743    5705 start.go:128] duration metric: took 2.354186375s to createHost
	I0728 18:44:56.532783    5705 start.go:83] releasing machines lock for "enable-default-cni-496000", held for 2.354272625s
	W0728 18:44:56.532805    5705 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:56.542624    5705 out.go:177] * Deleting "enable-default-cni-496000" in qemu2 ...
	W0728 18:44:56.557875    5705 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:44:56.557885    5705 start.go:729] Will try again in 5 seconds ...
	I0728 18:45:01.559962    5705 start.go:360] acquireMachinesLock for enable-default-cni-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:01.560252    5705 start.go:364] duration metric: took 220.542µs to acquireMachinesLock for "enable-default-cni-496000"
	I0728 18:45:01.560302    5705 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:01.560381    5705 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:01.566714    5705 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:01.589498    5705 start.go:159] libmachine.API.Create for "enable-default-cni-496000" (driver="qemu2")
	I0728 18:45:01.589527    5705 client.go:168] LocalClient.Create starting
	I0728 18:45:01.589605    5705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:01.589650    5705 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:01.589658    5705 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:01.589696    5705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:01.589724    5705 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:01.589730    5705 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:01.590182    5705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:01.739865    5705 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:01.820100    5705 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:01.820107    5705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:01.820340    5705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:45:01.829645    5705 main.go:141] libmachine: STDOUT: 
	I0728 18:45:01.829667    5705 main.go:141] libmachine: STDERR: 
	I0728 18:45:01.829721    5705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2 +20000M
	I0728 18:45:01.837848    5705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:01.837867    5705 main.go:141] libmachine: STDERR: 
	I0728 18:45:01.837889    5705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:45:01.837893    5705 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:01.837907    5705 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:01.837932    5705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:93:31:ab:5b:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/enable-default-cni-496000/disk.qcow2
	I0728 18:45:01.839555    5705 main.go:141] libmachine: STDOUT: 
	I0728 18:45:01.839570    5705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:01.839583    5705 client.go:171] duration metric: took 250.055958ms to LocalClient.Create
	I0728 18:45:03.841741    5705 start.go:128] duration metric: took 2.281351917s to createHost
	I0728 18:45:03.841839    5705 start.go:83] releasing machines lock for "enable-default-cni-496000", held for 2.2815975s
	W0728 18:45:03.842222    5705 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:03.851530    5705 out.go:177] 
	W0728 18:45:03.855743    5705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:03.855767    5705 out.go:239] * 
	* 
	W0728 18:45:03.857273    5705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:03.867744    5705 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
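
Triage note: both createHost attempts above fail on the same STDERR line, Failed to connect to "/var/run/socket_vmnet": Connection refused, raised the moment minikube wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client. The SSH key and disk image are created without error, so the agent's socket_vmnet daemon is the prime suspect. A minimal shell sketch for checking it follows; the paths and client usage are taken from the log above, while the launchd service label is an assumption that depends on how socket_vmnet was installed on this agent.

	# Is the daemon socket present, and is any socket_vmnet process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Probe the client exactly as minikube invokes it, but with a no-op child
	# command; against a healthy daemon this exits 0 instead of printing
	# "Connection refused" (the client hands the connected socket to the
	# child on fd 3, which is why the log shows -netdev socket,...,fd=3).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Hypothetical service label; restart the daemon if the checks above fail.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
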
TestNetworkPlugins/group/flannel/Start (9.87s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.87128775s)
-- stdout --
	* [flannel-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-496000" primary control-plane node in "flannel-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0728 18:45:06.026090    5814 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:06.026220    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:06.026223    5814 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:06.026228    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:06.026346    5814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:45:06.027408    5814 out.go:298] Setting JSON to false
	I0728 18:45:06.043419    5814 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4477,"bootTime":1722213029,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:45:06.043483    5814 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:06.050211    5814 out.go:177] * [flannel-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:45:06.058155    5814 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:06.058200    5814 notify.go:220] Checking for updates...
	I0728 18:45:06.065083    5814 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:45:06.069101    5814 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:45:06.072077    5814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:06.075053    5814 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:45:06.078068    5814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:06.081420    5814 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:06.081485    5814 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:45:06.081531    5814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:06.086115    5814 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:45:06.092076    5814 start.go:297] selected driver: qemu2
	I0728 18:45:06.092081    5814 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:45:06.092087    5814 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:06.094267    5814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:45:06.098104    5814 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:45:06.101169    5814 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:06.101184    5814 cni.go:84] Creating CNI manager for "flannel"
	I0728 18:45:06.101187    5814 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0728 18:45:06.101221    5814 start.go:340] cluster config:
	{Name:flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:06.104647    5814 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:06.112133    5814 out.go:177] * Starting "flannel-496000" primary control-plane node in "flannel-496000" cluster
	I0728 18:45:06.116124    5814 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:06.116139    5814 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:45:06.116152    5814 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:06.116227    5814 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:45:06.116240    5814 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:45:06.116294    5814 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/flannel-496000/config.json ...
	I0728 18:45:06.116311    5814 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/flannel-496000/config.json: {Name:mk496983e1933ea22cd3006de6deedf228518707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:06.116521    5814 start.go:360] acquireMachinesLock for flannel-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:06.116553    5814 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "flannel-496000"
	I0728 18:45:06.116565    5814 start.go:93] Provisioning new machine with config: &{Name:flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:06.116613    5814 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:06.125115    5814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:06.141873    5814 start.go:159] libmachine.API.Create for "flannel-496000" (driver="qemu2")
	I0728 18:45:06.141903    5814 client.go:168] LocalClient.Create starting
	I0728 18:45:06.141968    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:06.142003    5814 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:06.142011    5814 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:06.142052    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:06.142074    5814 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:06.142081    5814 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:06.142432    5814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:06.287347    5814 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:06.379862    5814 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:06.379868    5814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:06.380073    5814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:06.389262    5814 main.go:141] libmachine: STDOUT: 
	I0728 18:45:06.389282    5814 main.go:141] libmachine: STDERR: 
	I0728 18:45:06.389338    5814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2 +20000M
	I0728 18:45:06.397496    5814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:06.397510    5814 main.go:141] libmachine: STDERR: 
	I0728 18:45:06.397533    5814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:06.397539    5814 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:06.397552    5814 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:06.397592    5814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:94:25:08:bf:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:06.399180    5814 main.go:141] libmachine: STDOUT: 
	I0728 18:45:06.399198    5814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:06.399217    5814 client.go:171] duration metric: took 257.312459ms to LocalClient.Create
	I0728 18:45:08.401393    5814 start.go:128] duration metric: took 2.284774292s to createHost
	I0728 18:45:08.401577    5814 start.go:83] releasing machines lock for "flannel-496000", held for 2.285037416s
	W0728 18:45:08.401633    5814 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:08.415811    5814 out.go:177] * Deleting "flannel-496000" in qemu2 ...
	W0728 18:45:08.442138    5814 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:08.442228    5814 start.go:729] Will try again in 5 seconds ...
	I0728 18:45:13.444354    5814 start.go:360] acquireMachinesLock for flannel-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:13.444814    5814 start.go:364] duration metric: took 383.5µs to acquireMachinesLock for "flannel-496000"
	I0728 18:45:13.444951    5814 start.go:93] Provisioning new machine with config: &{Name:flannel-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:13.445288    5814 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:13.451911    5814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:13.500756    5814 start.go:159] libmachine.API.Create for "flannel-496000" (driver="qemu2")
	I0728 18:45:13.500808    5814 client.go:168] LocalClient.Create starting
	I0728 18:45:13.500924    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:13.500992    5814 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:13.501017    5814 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:13.501115    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:13.501161    5814 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:13.501181    5814 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:13.501729    5814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:13.658626    5814 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:13.813201    5814 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:13.813209    5814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:13.813476    5814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:13.823303    5814 main.go:141] libmachine: STDOUT: 
	I0728 18:45:13.823337    5814 main.go:141] libmachine: STDERR: 
	I0728 18:45:13.823395    5814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2 +20000M
	I0728 18:45:13.831496    5814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:13.831510    5814 main.go:141] libmachine: STDERR: 
	I0728 18:45:13.831521    5814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:13.831526    5814 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:13.831538    5814 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:13.831563    5814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:f4:95:e9:3c:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/flannel-496000/disk.qcow2
	I0728 18:45:13.833197    5814 main.go:141] libmachine: STDOUT: 
	I0728 18:45:13.833214    5814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:13.833228    5814 client.go:171] duration metric: took 332.416125ms to LocalClient.Create
	I0728 18:45:15.835286    5814 start.go:128] duration metric: took 2.390002125s to createHost
	I0728 18:45:15.835344    5814 start.go:83] releasing machines lock for "flannel-496000", held for 2.3905345s
	W0728 18:45:15.835467    5814 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:15.845755    5814 out.go:177] 
	W0728 18:45:15.852671    5814 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:15.852677    5814 out.go:239] * 
	* 
	W0728 18:45:15.853270    5814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:15.861678    5814 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
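
Triage note: as in the previous group, the qemu-img convert and qemu-img resize steps succeed on both attempts (empty STDERR), and minikube's retry path (delete the profile, wait 5 seconds, re-acquire the machines lock, recreate) reproduces the identical socket_vmnet failure, so each test spends its ~10 seconds on two doomed VM creations. If the disk pipeline itself ever needs to be ruled out, it can be replayed standalone; the sketch below uses throwaway file names rather than the real profile paths, with the same flags the driver logs.

	# Replay the driver's disk steps against a scratch raw image.
	qemu-img create -f raw scratch.raw 20M        # stand-in for disk.qcow2.raw
	qemu-img convert -f raw -O qcow2 scratch.raw scratch.qcow2
	qemu-img resize scratch.qcow2 +20000M         # same growth the driver requests
	qemu-img info scratch.qcow2                   # virtual size should report roughly 19.6 GiB
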
TestNetworkPlugins/group/bridge/Start (9.78s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.782338959s)
-- stdout --
	* [bridge-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-496000" primary control-plane node in "bridge-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0728 18:45:18.202245    5932 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:18.202393    5932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:18.202396    5932 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:18.202399    5932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:18.202528    5932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:45:18.203517    5932 out.go:298] Setting JSON to false
	I0728 18:45:18.219791    5932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4489,"bootTime":1722213029,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:45:18.219864    5932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:18.225124    5932 out.go:177] * [bridge-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:45:18.232106    5932 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:18.232182    5932 notify.go:220] Checking for updates...
	I0728 18:45:18.240137    5932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:45:18.243148    5932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:45:18.247128    5932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:18.250153    5932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:45:18.253115    5932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:18.256399    5932 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:18.256470    5932 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:45:18.256520    5932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:18.260153    5932 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:45:18.267121    5932 start.go:297] selected driver: qemu2
	I0728 18:45:18.267126    5932 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:45:18.267132    5932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:18.269216    5932 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:45:18.273148    5932 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:45:18.276190    5932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:18.276204    5932 cni.go:84] Creating CNI manager for "bridge"
	I0728 18:45:18.276212    5932 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:45:18.276243    5932 start.go:340] cluster config:
	{Name:bridge-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:18.279478    5932 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:18.288004    5932 out.go:177] * Starting "bridge-496000" primary control-plane node in "bridge-496000" cluster
	I0728 18:45:18.292102    5932 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:18.292116    5932 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:45:18.292129    5932 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:18.292189    5932 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:45:18.292197    5932 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:45:18.292270    5932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/bridge-496000/config.json ...
	I0728 18:45:18.292282    5932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/bridge-496000/config.json: {Name:mk27407bf4eab6c4d49c02613a7003ca96f44175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:18.292485    5932 start.go:360] acquireMachinesLock for bridge-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:18.292514    5932 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "bridge-496000"
	I0728 18:45:18.292526    5932 start.go:93] Provisioning new machine with config: &{Name:bridge-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:18.292554    5932 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:18.300113    5932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:18.315196    5932 start.go:159] libmachine.API.Create for "bridge-496000" (driver="qemu2")
	I0728 18:45:18.315220    5932 client.go:168] LocalClient.Create starting
	I0728 18:45:18.315298    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:18.315327    5932 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:18.315336    5932 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:18.315379    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:18.315405    5932 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:18.315417    5932 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:18.315739    5932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:18.461967    5932 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:18.549629    5932 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:18.549636    5932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:18.549843    5932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:18.559022    5932 main.go:141] libmachine: STDOUT: 
	I0728 18:45:18.559039    5932 main.go:141] libmachine: STDERR: 
	I0728 18:45:18.559105    5932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2 +20000M
	I0728 18:45:18.566791    5932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:18.566805    5932 main.go:141] libmachine: STDERR: 
	I0728 18:45:18.566822    5932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:18.566826    5932 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:18.566846    5932 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:18.566878    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:de:35:88:c1:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:18.568469    5932 main.go:141] libmachine: STDOUT: 
	I0728 18:45:18.568486    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:18.568503    5932 client.go:171] duration metric: took 253.28075ms to LocalClient.Create
	I0728 18:45:20.570695    5932 start.go:128] duration metric: took 2.278134791s to createHost
	I0728 18:45:20.570806    5932 start.go:83] releasing machines lock for "bridge-496000", held for 2.278305292s
	W0728 18:45:20.570881    5932 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:20.582012    5932 out.go:177] * Deleting "bridge-496000" in qemu2 ...
	W0728 18:45:20.613104    5932 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:20.613142    5932 start.go:729] Will try again in 5 seconds ...
	I0728 18:45:25.615275    5932 start.go:360] acquireMachinesLock for bridge-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:25.615911    5932 start.go:364] duration metric: took 512.5µs to acquireMachinesLock for "bridge-496000"
	I0728 18:45:25.616112    5932 start.go:93] Provisioning new machine with config: &{Name:bridge-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:25.616450    5932 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:25.623145    5932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:25.667389    5932 start.go:159] libmachine.API.Create for "bridge-496000" (driver="qemu2")
	I0728 18:45:25.667444    5932 client.go:168] LocalClient.Create starting
	I0728 18:45:25.667544    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:25.667609    5932 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:25.667630    5932 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:25.667720    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:25.667764    5932 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:25.667781    5932 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:25.668564    5932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:25.822318    5932 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:25.895437    5932 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:25.895442    5932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:25.895654    5932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:25.905067    5932 main.go:141] libmachine: STDOUT: 
	I0728 18:45:25.905084    5932 main.go:141] libmachine: STDERR: 
	I0728 18:45:25.905141    5932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2 +20000M
	I0728 18:45:25.913252    5932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:25.913273    5932 main.go:141] libmachine: STDERR: 
	I0728 18:45:25.913287    5932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:25.913295    5932 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:25.913305    5932 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:25.913331    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:a4:5a:a5:7e:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/bridge-496000/disk.qcow2
	I0728 18:45:25.914980    5932 main.go:141] libmachine: STDOUT: 
	I0728 18:45:25.914993    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:25.915004    5932 client.go:171] duration metric: took 247.556667ms to LocalClient.Create
	I0728 18:45:27.917185    5932 start.go:128] duration metric: took 2.300726125s to createHost
	I0728 18:45:27.917287    5932 start.go:83] releasing machines lock for "bridge-496000", held for 2.301342875s
	W0728 18:45:27.917599    5932 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:27.929232    5932 out.go:177] 
	W0728 18:45:27.933275    5932 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:27.933335    5932 out.go:239] * 
	* 
	W0728 18:45:27.935501    5932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:27.946123    5932 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
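Every failure in this group is the same: the guest VM is never reachable because /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the socket_vmnet daemon, so each start attempt ends in GUEST_PROVISION after one delete-and-retry cycle. A minimal way to confirm the daemon is down on the agent, using only the paths that appear in the failing command line (the nc probe is an assumed reproduction step, not something the harness runs):

	# Is anything listening on the unix socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet

	# Reproduce the "Connection refused" outside of minikube; a successful
	# connect here means the daemon is healthy and the qemu2 driver can start.
	nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet is accepting connections"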

TestNetworkPlugins/group/kubenet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-496000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.731166291s)

-- stdout --
	* [kubenet-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-496000" primary control-plane node in "kubenet-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:45:30.092779    6043 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:30.092900    6043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:30.092903    6043 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:30.092905    6043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:30.093035    6043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:45:30.094076    6043 out.go:298] Setting JSON to false
	I0728 18:45:30.110003    6043 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4501,"bootTime":1722213029,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:45:30.110073    6043 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:30.116538    6043 out.go:177] * [kubenet-496000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:45:30.124353    6043 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:30.124385    6043 notify.go:220] Checking for updates...
	I0728 18:45:30.131355    6043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:45:30.134358    6043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:45:30.138321    6043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:30.141418    6043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:45:30.144413    6043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:30.147729    6043 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:30.147801    6043 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:45:30.147854    6043 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:30.151308    6043 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:45:30.158385    6043 start.go:297] selected driver: qemu2
	I0728 18:45:30.158390    6043 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:45:30.158395    6043 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:30.160408    6043 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:45:30.163335    6043 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:45:30.166415    6043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:30.166448    6043 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0728 18:45:30.166478    6043 start.go:340] cluster config:
	{Name:kubenet-496000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:30.169883    6043 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:30.178380    6043 out.go:177] * Starting "kubenet-496000" primary control-plane node in "kubenet-496000" cluster
	I0728 18:45:30.182390    6043 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:45:30.182413    6043 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:45:30.182430    6043 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:30.182495    6043 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:45:30.182500    6043 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:45:30.182570    6043 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kubenet-496000/config.json ...
	I0728 18:45:30.182586    6043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/kubenet-496000/config.json: {Name:mkf3d8cf0dfbcec25e14387ba542c11e0f123fde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:30.182781    6043 start.go:360] acquireMachinesLock for kubenet-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:30.182810    6043 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "kubenet-496000"
	I0728 18:45:30.182822    6043 start.go:93] Provisioning new machine with config: &{Name:kubenet-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:30.182858    6043 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:30.190376    6043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:30.205181    6043 start.go:159] libmachine.API.Create for "kubenet-496000" (driver="qemu2")
	I0728 18:45:30.205222    6043 client.go:168] LocalClient.Create starting
	I0728 18:45:30.205286    6043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:30.205319    6043 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:30.205329    6043 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:30.205365    6043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:30.205386    6043 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:30.205393    6043 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:30.205772    6043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:30.350849    6043 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:30.405323    6043 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:30.405330    6043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:30.405536    6043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:30.414835    6043 main.go:141] libmachine: STDOUT: 
	I0728 18:45:30.414854    6043 main.go:141] libmachine: STDERR: 
	I0728 18:45:30.414899    6043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2 +20000M
	I0728 18:45:30.422773    6043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:30.422829    6043 main.go:141] libmachine: STDERR: 
	I0728 18:45:30.422847    6043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:30.422853    6043 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:30.422865    6043 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:30.422888    6043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e4:e2:3c:8b:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:30.424592    6043 main.go:141] libmachine: STDOUT: 
	I0728 18:45:30.424608    6043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:30.424627    6043 client.go:171] duration metric: took 219.402542ms to LocalClient.Create
	I0728 18:45:32.426796    6043 start.go:128] duration metric: took 2.2439375s to createHost
	I0728 18:45:32.426866    6043 start.go:83] releasing machines lock for "kubenet-496000", held for 2.244069709s
	W0728 18:45:32.426961    6043 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:32.440298    6043 out.go:177] * Deleting "kubenet-496000" in qemu2 ...
	W0728 18:45:32.464139    6043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:32.464174    6043 start.go:729] Will try again in 5 seconds ...
	I0728 18:45:37.466279    6043 start.go:360] acquireMachinesLock for kubenet-496000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:37.466723    6043 start.go:364] duration metric: took 370.417µs to acquireMachinesLock for "kubenet-496000"
	I0728 18:45:37.466775    6043 start.go:93] Provisioning new machine with config: &{Name:kubenet-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:37.466960    6043 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:37.474523    6043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0728 18:45:37.514270    6043 start.go:159] libmachine.API.Create for "kubenet-496000" (driver="qemu2")
	I0728 18:45:37.514322    6043 client.go:168] LocalClient.Create starting
	I0728 18:45:37.514435    6043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:37.514499    6043 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:37.514516    6043 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:37.514566    6043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:37.514607    6043 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:37.514618    6043 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:37.515173    6043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:37.665815    6043 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:37.726449    6043 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:37.726456    6043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:37.726687    6043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:37.736599    6043 main.go:141] libmachine: STDOUT: 
	I0728 18:45:37.736617    6043 main.go:141] libmachine: STDERR: 
	I0728 18:45:37.736674    6043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2 +20000M
	I0728 18:45:37.744757    6043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:37.744774    6043 main.go:141] libmachine: STDERR: 
	I0728 18:45:37.744787    6043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:37.744791    6043 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:37.744801    6043 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:37.744840    6043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:b9:d5:a7:6c:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/kubenet-496000/disk.qcow2
	I0728 18:45:37.746402    6043 main.go:141] libmachine: STDOUT: 
	I0728 18:45:37.746416    6043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:37.746436    6043 client.go:171] duration metric: took 232.103541ms to LocalClient.Create
	I0728 18:45:39.748468    6043 start.go:128] duration metric: took 2.281521084s to createHost
	I0728 18:45:39.748480    6043 start.go:83] releasing machines lock for "kubenet-496000", held for 2.281767084s
	W0728 18:45:39.748553    6043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:39.764039    6043 out.go:177] 
	W0728 18:45:39.768095    6043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:39.768100    6043 out.go:239] * 
	* 
	W0728 18:45:39.768619    6043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:39.779048    6043 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.73s)
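If the socket probe still refuses, the fix is to restart the daemon itself; the per-test "Deleting ... in qemu2" retry never touches it, which is why every subsequent network-plugin test fails identically. A sketch assuming the lima-vm socket_vmnet layout implied by the client path in these logs; the gateway address is illustrative, not taken from this report:

	# Run the vmnet helper as root so QEMU guests can attach; adjust the gateway
	# and socket path to match the agent's install before relying on this.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &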

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.849872791s)

-- stdout --
	* [old-k8s-version-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-260000" primary control-plane node in "old-k8s-version-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:45:41.938911    6158 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:41.939052    6158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:41.939056    6158 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:41.939066    6158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:41.939208    6158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:45:41.940275    6158 out.go:298] Setting JSON to false
	I0728 18:45:41.956544    6158 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4512,"bootTime":1722213029,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:45:41.956608    6158 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:41.962481    6158 out.go:177] * [old-k8s-version-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:45:41.971230    6158 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:41.971288    6158 notify.go:220] Checking for updates...
	I0728 18:45:41.980123    6158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:45:41.983243    6158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:45:41.984779    6158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:41.988209    6158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:45:41.991181    6158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:41.994462    6158 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:45:41.994534    6158 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:45:41.994582    6158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:41.999153    6158 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:45:42.006177    6158 start.go:297] selected driver: qemu2
	I0728 18:45:42.006182    6158 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:45:42.006187    6158 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:42.008632    6158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:45:42.013204    6158 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:45:42.016275    6158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:42.016296    6158 cni.go:84] Creating CNI manager for ""
	I0728 18:45:42.016311    6158 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 18:45:42.016348    6158 start.go:340] cluster config:
	{Name:old-k8s-version-260000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:42.020168    6158 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:42.028174    6158 out.go:177] * Starting "old-k8s-version-260000" primary control-plane node in "old-k8s-version-260000" cluster
	I0728 18:45:42.032182    6158 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 18:45:42.032198    6158 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 18:45:42.032210    6158 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:42.032263    6158 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:45:42.032268    6158 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 18:45:42.032333    6158 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/old-k8s-version-260000/config.json ...
	I0728 18:45:42.032345    6158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/old-k8s-version-260000/config.json: {Name:mk0800330deeabc47b8310b0c8354b555acf24a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:45:42.032567    6158 start.go:360] acquireMachinesLock for old-k8s-version-260000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:42.032603    6158 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "old-k8s-version-260000"
	I0728 18:45:42.032617    6158 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:42.032644    6158 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:42.040171    6158 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:45:42.057598    6158 start.go:159] libmachine.API.Create for "old-k8s-version-260000" (driver="qemu2")
	I0728 18:45:42.057631    6158 client.go:168] LocalClient.Create starting
	I0728 18:45:42.057693    6158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:42.057725    6158 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:42.057739    6158 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:42.057777    6158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:42.057801    6158 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:42.057811    6158 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:42.058246    6158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:42.209676    6158 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:42.292662    6158 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:42.292673    6158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:42.292890    6158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:42.302099    6158 main.go:141] libmachine: STDOUT: 
	I0728 18:45:42.302127    6158 main.go:141] libmachine: STDERR: 
	I0728 18:45:42.302187    6158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2 +20000M
	I0728 18:45:42.309983    6158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:42.309994    6158 main.go:141] libmachine: STDERR: 
	I0728 18:45:42.310013    6158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:42.310018    6158 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:42.310031    6158 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:42.310058    6158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c0:26:2d:aa:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:42.311730    6158 main.go:141] libmachine: STDOUT: 
	I0728 18:45:42.311744    6158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:42.311762    6158 client.go:171] duration metric: took 254.128334ms to LocalClient.Create
	I0728 18:45:44.313857    6158 start.go:128] duration metric: took 2.281222375s to createHost
	I0728 18:45:44.313900    6158 start.go:83] releasing machines lock for "old-k8s-version-260000", held for 2.281308209s
	W0728 18:45:44.313951    6158 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:44.333244    6158 out.go:177] * Deleting "old-k8s-version-260000" in qemu2 ...
	W0728 18:45:44.355241    6158 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:44.355256    6158 start.go:729] Will try again in 5 seconds ...
	I0728 18:45:49.357483    6158 start.go:360] acquireMachinesLock for old-k8s-version-260000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:49.357971    6158 start.go:364] duration metric: took 367.208µs to acquireMachinesLock for "old-k8s-version-260000"
	I0728 18:45:49.358091    6158 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:45:49.358285    6158 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:45:49.366302    6158 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:45:49.405901    6158 start.go:159] libmachine.API.Create for "old-k8s-version-260000" (driver="qemu2")
	I0728 18:45:49.405957    6158 client.go:168] LocalClient.Create starting
	I0728 18:45:49.406062    6158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:45:49.406131    6158 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:49.406145    6158 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:49.406192    6158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:45:49.406231    6158 main.go:141] libmachine: Decoding PEM data...
	I0728 18:45:49.406244    6158 main.go:141] libmachine: Parsing certificate...
	I0728 18:45:49.406767    6158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:45:49.562498    6158 main.go:141] libmachine: Creating SSH key...
	I0728 18:45:49.702334    6158 main.go:141] libmachine: Creating Disk image...
	I0728 18:45:49.702347    6158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:45:49.702581    6158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:49.713095    6158 main.go:141] libmachine: STDOUT: 
	I0728 18:45:49.713113    6158 main.go:141] libmachine: STDERR: 
	I0728 18:45:49.713163    6158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2 +20000M
	I0728 18:45:49.721500    6158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:45:49.721519    6158 main.go:141] libmachine: STDERR: 
	I0728 18:45:49.721530    6158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:49.721534    6158 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:45:49.721552    6158 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:49.721592    6158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e3:14:c6:78:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:49.723337    6158 main.go:141] libmachine: STDOUT: 
	I0728 18:45:49.723351    6158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:49.723363    6158 client.go:171] duration metric: took 317.403125ms to LocalClient.Create
	I0728 18:45:51.725451    6158 start.go:128] duration metric: took 2.367167125s to createHost
	I0728 18:45:51.725495    6158 start.go:83] releasing machines lock for "old-k8s-version-260000", held for 2.367527583s
	W0728 18:45:51.725705    6158 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:51.730323    6158 out.go:177] 
	W0728 18:45:51.734964    6158 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:51.734983    6158 out.go:239] * 
	W0728 18:45:51.735912    6158 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:45:51.752045    6158 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (37.553292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
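
Every failure in this group reduces to the same root cause visible in the stderr above: the socket_vmnet daemon's unix socket at /var/run/socket_vmnet refuses connections, so socket_vmnet_client can never hand qemu a network fd. A minimal Go sketch of that probe (a hypothetical helper, not part of the test suite; the socket path is taken from the failing command line):

// probe_socket_vmnet.go: dial the unix socket socket_vmnet_client needs,
// reproducing the "Connection refused" seen above when the daemon is down.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing command line above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}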

TestStartStop/group/old-k8s-version/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-260000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-260000 create -f testdata/busybox.yaml: exit status 1 (27.723459ms)

** stderr ** 
	error: context "old-k8s-version-260000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-260000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (28.423667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (28.019ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.08s)
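
Because FirstStart above never created the cluster, the kubeconfig context old-k8s-version-260000 is absent, and every kubectl call in this group fails identically. A hedged sketch of the guard this failure mode implies, using the real `kubectl config get-contexts -o name` subcommand (the helper and program are illustrative only):

// context_check.go: verify a kubeconfig context exists before running
// kubectl against it, mirroring the 'context "..." does not exist' error above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-260000")
	fmt.Println("context exists:", ok, "err:", err)
}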

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-260000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-260000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-260000 describe deploy/metrics-server -n kube-system: exit status 1 (27.235416ms)

** stderr ** 
	error: context "old-k8s-version-260000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-260000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (28.305208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
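
The assertion at start_stop_delete_test.go:221 expects the metrics-server Deployment image to carry the fake.domain registry prefix passed via --registries. An illustrative Go sketch of that check (context and deployment names are taken from the log; the program itself is hypothetical):

// addon_image_check.go: extract the metrics-server image via kubectl jsonpath
// and confirm the registry override took effect. With no cluster in this run,
// the lookup fails as in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-260000",
		"get", "deploy", "metrics-server", "-n", "kube-system",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("lookup failed (cluster never came up in this run):", err)
		return
	}
	img := strings.TrimSpace(string(out))
	fmt.Println("image:", img, "has fake.domain prefix:", strings.HasPrefix(img, "fake.domain/"))
}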

TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.218627583s)

-- stdout --
	* [old-k8s-version-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-260000" primary control-plane node in "old-k8s-version-260000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:45:55.899258    6218 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:45:55.899432    6218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:55.899437    6218 out.go:304] Setting ErrFile to fd 2...
	I0728 18:45:55.899439    6218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:45:55.899550    6218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:45:55.902211    6218 out.go:298] Setting JSON to false
	I0728 18:45:55.919171    6218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4526,"bootTime":1722213029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:45:55.919240    6218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:45:55.927937    6218 out.go:177] * [old-k8s-version-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:45:55.931908    6218 notify.go:220] Checking for updates...
	I0728 18:45:55.935915    6218 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:45:55.942909    6218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:45:55.952899    6218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:45:55.963825    6218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:45:55.966913    6218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:45:55.969912    6218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:45:55.973154    6218 config.go:182] Loaded profile config "old-k8s-version-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0728 18:45:55.975868    6218 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0728 18:45:55.979031    6218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:45:55.986906    6218 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:45:55.997888    6218 start.go:297] selected driver: qemu2
	I0728 18:45:55.997891    6218 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:55.997959    6218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:45:56.000312    6218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:45:56.000334    6218 cni.go:84] Creating CNI manager for ""
	I0728 18:45:56.000340    6218 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 18:45:56.000361    6218 start.go:340] cluster config:
	{Name:old-k8s-version-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:45:56.003722    6218 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:45:56.011876    6218 out.go:177] * Starting "old-k8s-version-260000" primary control-plane node in "old-k8s-version-260000" cluster
	I0728 18:45:56.015950    6218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 18:45:56.015990    6218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 18:45:56.016011    6218 cache.go:56] Caching tarball of preloaded images
	I0728 18:45:56.016104    6218 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:45:56.016111    6218 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 18:45:56.016176    6218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/old-k8s-version-260000/config.json ...
	I0728 18:45:56.016583    6218 start.go:360] acquireMachinesLock for old-k8s-version-260000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:45:56.016622    6218 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "old-k8s-version-260000"
	I0728 18:45:56.016632    6218 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:45:56.016638    6218 fix.go:54] fixHost starting: 
	I0728 18:45:56.016756    6218 fix.go:112] recreateIfNeeded on old-k8s-version-260000: state=Stopped err=<nil>
	W0728 18:45:56.016765    6218 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:45:56.019910    6218 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-260000" ...
	I0728 18:45:56.026918    6218 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:45:56.026958    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e3:14:c6:78:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:45:56.028986    6218 main.go:141] libmachine: STDOUT: 
	I0728 18:45:56.029005    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:45:56.029035    6218 fix.go:56] duration metric: took 12.3985ms for fixHost
	I0728 18:45:56.029040    6218 start.go:83] releasing machines lock for "old-k8s-version-260000", held for 12.413334ms
	W0728 18:45:56.029048    6218 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:45:56.029102    6218 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:45:56.029107    6218 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:01.029420    6218 start.go:360] acquireMachinesLock for old-k8s-version-260000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:01.029952    6218 start.go:364] duration metric: took 392.375µs to acquireMachinesLock for "old-k8s-version-260000"
	I0728 18:46:01.030108    6218 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:01.030130    6218 fix.go:54] fixHost starting: 
	I0728 18:46:01.030860    6218 fix.go:112] recreateIfNeeded on old-k8s-version-260000: state=Stopped err=<nil>
	W0728 18:46:01.030887    6218 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:01.037223    6218 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-260000" ...
	I0728 18:46:01.041540    6218 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:01.041758    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e3:14:c6:78:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/old-k8s-version-260000/disk.qcow2
	I0728 18:46:01.051630    6218 main.go:141] libmachine: STDOUT: 
	I0728 18:46:01.051711    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:01.051825    6218 fix.go:56] duration metric: took 21.695417ms for fixHost
	I0728 18:46:01.051845    6218 start.go:83] releasing machines lock for "old-k8s-version-260000", held for 21.870542ms
	W0728 18:46:01.052089    6218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-260000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:01.060529    6218 out.go:177] 
	W0728 18:46:01.064619    6218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:01.064641    6218 out.go:239] * 
	W0728 18:46:01.067031    6218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:01.076605    6218 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-260000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (65.44375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
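
The failing command line's `-netdev socket,id=net0,fd=3` argument shows the mechanism that breaks here: socket_vmnet_client dials the daemon and execs qemu with the connected socket as file descriptor 3. A sketch of that fd-passing pattern in Go, under the assumption of a stand-in child command (in Go's os/exec, ExtraFiles[0] becomes fd 3 in the child, after stdin/stdout/stderr):

// fd_passthrough.go: illustrative only. Dial the daemon, then hand the
// connected socket to a child process as fd 3. When the daemon is down,
// the dial fails with the same "Connection refused" as the log above.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("dial failed, as in the log above: %v", err)
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("/bin/sh", "-c", "ls -l /dev/fd/3") // stand-in for qemu-system-aarch64
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.ExtraFiles = []*os.File{f} // becomes fd 3 in the child
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}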

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-260000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (32.455583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-260000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-260000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-260000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.238833ms)

** stderr ** 
	error: context "old-k8s-version-260000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-260000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (29.951292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-260000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (29.301375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
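
The `-want +got` diff above lists every stock v1.20.0 image as missing because `image list` ran against a VM that never booted, so the got side is empty. A minimal sketch of the underlying set-difference check (the sample slices are abbreviated from the want list above):

// image_diff.go: report expected images absent from the actual list.
// With no running VM, got is empty and every want entry shows as missing.
package main

import "fmt"

func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var out []string
	for _, w := range want {
		if !have[w] {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
	got := []string{} // empty: the qemu2 VM never started in this run
	fmt.Println("missing:", missing(want, got))
}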

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-260000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-260000 --alsologtostderr -v=1: exit status 83 (42.11325ms)

-- stdout --
	* The control-plane node old-k8s-version-260000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-260000"

-- /stdout --
** stderr ** 
	I0728 18:46:01.345688    6239 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:01.346072    6239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:01.346079    6239 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:01.346082    6239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:01.346215    6239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:01.346455    6239 out.go:298] Setting JSON to false
	I0728 18:46:01.346462    6239 mustload.go:65] Loading cluster: old-k8s-version-260000
	I0728 18:46:01.346654    6239 config.go:182] Loaded profile config "old-k8s-version-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0728 18:46:01.350809    6239 out.go:177] * The control-plane node old-k8s-version-260000 host is not running: state=Stopped
	I0728 18:46:01.354738    6239 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-260000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-260000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (28.08925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (29.234208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.769572791s)

-- stdout --
	* [no-preload-933000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-933000" primary control-plane node in "no-preload-933000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-933000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:01.671954    6256 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:01.672106    6256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:01.672110    6256 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:01.672112    6256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:01.672266    6256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:01.673329    6256 out.go:298] Setting JSON to false
	I0728 18:46:01.689468    6256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4532,"bootTime":1722213029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:01.689535    6256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:01.695009    6256 out.go:177] * [no-preload-933000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:01.705924    6256 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:01.705966    6256 notify.go:220] Checking for updates...
	I0728 18:46:01.712954    6256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:01.715918    6256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:01.718974    6256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:01.721963    6256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:01.724936    6256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:01.728220    6256 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:01.728285    6256 config.go:182] Loaded profile config "stopped-upgrade-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0728 18:46:01.728336    6256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:01.731952    6256 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:46:01.738916    6256 start.go:297] selected driver: qemu2
	I0728 18:46:01.738922    6256 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:46:01.738928    6256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:01.741055    6256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:46:01.743903    6256 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:46:01.747911    6256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:01.747955    6256 cni.go:84] Creating CNI manager for ""
	I0728 18:46:01.747964    6256 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:01.747968    6256 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:46:01.748005    6256 start.go:340] cluster config:
	{Name:no-preload-933000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-933000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:01.751354    6256 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.759762    6256 out.go:177] * Starting "no-preload-933000" primary control-plane node in "no-preload-933000" cluster
	I0728 18:46:01.763884    6256 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 18:46:01.763949    6256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/no-preload-933000/config.json ...
	I0728 18:46:01.763966    6256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/no-preload-933000/config.json: {Name:mkad4fe46fac9835b9e9ea2bb67e36a9c67fa776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:01.763961    6256 cache.go:107] acquiring lock: {Name:mk7b1b69c1606f1420fea70fdfc405dc8ede5ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764037    6256 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0728 18:46:01.764046    6256 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.458µs
	I0728 18:46:01.764052    6256 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0728 18:46:01.764058    6256 cache.go:107] acquiring lock: {Name:mk8c5ec3e1e2369b324e77bb608a427c2affc704 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764072    6256 cache.go:107] acquiring lock: {Name:mk696b624ef407ff9f380c60fc44579b94373fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764126    6256 cache.go:107] acquiring lock: {Name:mk5f4deeba92c67f1b86d0b48d86406a78d9a2cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764151    6256 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0728 18:46:01.764205    6256 start.go:360] acquireMachinesLock for no-preload-933000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:01.764203    6256 cache.go:107] acquiring lock: {Name:mk40db02310ed29dfed0ffb10fef60eebba55cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764243    6256 start.go:364] duration metric: took 32.958µs to acquireMachinesLock for "no-preload-933000"
	I0728 18:46:01.764241    6256 cache.go:107] acquiring lock: {Name:mk577c42188ace3d90afc9161049bad482ff2f23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764260    6256 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0728 18:46:01.764256    6256 start.go:93] Provisioning new machine with config: &{Name:no-preload-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-933000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:01.764307    6256 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:01.764108    6256 cache.go:107] acquiring lock: {Name:mk903007b3e8f3d3fab693e55a68596345a0fedb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764074    6256 cache.go:107] acquiring lock: {Name:mk61bc7604cee7e45e71c138ef0662069ef96daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:01.764383    6256 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0728 18:46:01.764979    6256 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0728 18:46:01.765013    6256 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0728 18:46:01.765024    6256 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0728 18:46:01.765031    6256 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:46:01.771840    6256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:01.780927    6256 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0728 18:46:01.780946    6256 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0728 18:46:01.781001    6256 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0728 18:46:01.781048    6256 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0728 18:46:01.781102    6256 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0728 18:46:01.781548    6256 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0728 18:46:01.781581    6256 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0728 18:46:01.787361    6256 start.go:159] libmachine.API.Create for "no-preload-933000" (driver="qemu2")
	I0728 18:46:01.787380    6256 client.go:168] LocalClient.Create starting
	I0728 18:46:01.787462    6256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:01.787495    6256 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:01.787502    6256 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:01.787537    6256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:01.787559    6256 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:01.787569    6256 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:01.787959    6256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:01.940308    6256 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:02.018049    6256 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:02.018070    6256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:02.018342    6256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:02.028488    6256 main.go:141] libmachine: STDOUT: 
	I0728 18:46:02.028512    6256 main.go:141] libmachine: STDERR: 
	I0728 18:46:02.028567    6256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2 +20000M
	I0728 18:46:02.037873    6256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:02.037898    6256 main.go:141] libmachine: STDERR: 
	I0728 18:46:02.037912    6256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:02.037918    6256 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:02.037927    6256 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:02.037957    6256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:42:e3:d5:77:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:02.040106    6256 main.go:141] libmachine: STDOUT: 
	I0728 18:46:02.040125    6256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:02.040145    6256 client.go:171] duration metric: took 252.762958ms to LocalClient.Create
	I0728 18:46:02.133711    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0728 18:46:02.154160    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0728 18:46:02.170746    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0728 18:46:02.204334    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0728 18:46:02.260531    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0728 18:46:02.264904    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0728 18:46:02.274414    6256 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0728 18:46:02.349974    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0728 18:46:02.349991    6256 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 585.836792ms
	I0728 18:46:02.350001    6256 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0728 18:46:04.040706    6256 start.go:128] duration metric: took 2.276410333s to createHost
	I0728 18:46:04.040729    6256 start.go:83] releasing machines lock for "no-preload-933000", held for 2.276504083s
	W0728 18:46:04.040751    6256 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:04.050589    6256 out.go:177] * Deleting "no-preload-933000" in qemu2 ...
	W0728 18:46:04.064058    6256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:04.064070    6256 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:04.933824    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0728 18:46:04.933856    6256 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.169796667s
	I0728 18:46:04.933867    6256 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0728 18:46:05.334918    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0728 18:46:05.334948    6256 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.570877583s
	I0728 18:46:05.334964    6256 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0728 18:46:05.503235    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0728 18:46:05.503257    6256 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.739235541s
	I0728 18:46:05.503270    6256 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0728 18:46:06.731901    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0728 18:46:06.731950    6256 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.967942542s
	I0728 18:46:06.731980    6256 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0728 18:46:07.177031    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0728 18:46:07.177051    6256 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 5.413002666s
	I0728 18:46:07.177063    6256 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0728 18:46:09.064395    6256 start.go:360] acquireMachinesLock for no-preload-933000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:09.064889    6256 start.go:364] duration metric: took 411.833µs to acquireMachinesLock for "no-preload-933000"
	I0728 18:46:09.065006    6256 start.go:93] Provisioning new machine with config: &{Name:no-preload-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-933000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:09.065247    6256 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:09.074769    6256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:09.124029    6256 start.go:159] libmachine.API.Create for "no-preload-933000" (driver="qemu2")
	I0728 18:46:09.124076    6256 client.go:168] LocalClient.Create starting
	I0728 18:46:09.124199    6256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:09.124264    6256 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:09.124285    6256 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:09.124365    6256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:09.124411    6256 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:09.124442    6256 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:09.124970    6256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:09.291241    6256 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:09.352810    6256 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:09.352816    6256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:09.353015    6256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:09.362241    6256 main.go:141] libmachine: STDOUT: 
	I0728 18:46:09.362259    6256 main.go:141] libmachine: STDERR: 
	I0728 18:46:09.362312    6256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2 +20000M
	I0728 18:46:09.370413    6256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:09.370426    6256 main.go:141] libmachine: STDERR: 
	I0728 18:46:09.370440    6256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:09.370450    6256 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:09.370460    6256 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:09.370492    6256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:fb:15:56:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:09.372184    6256 main.go:141] libmachine: STDOUT: 
	I0728 18:46:09.372210    6256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:09.372226    6256 client.go:171] duration metric: took 248.146584ms to LocalClient.Create
	I0728 18:46:10.767323    6256 cache.go:157] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0728 18:46:10.767356    6256 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.003391875s
	I0728 18:46:10.767372    6256 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0728 18:46:10.767395    6256 cache.go:87] Successfully saved all images to host disk.
	I0728 18:46:11.374447    6256 start.go:128] duration metric: took 2.309188375s to createHost
	I0728 18:46:11.374543    6256 start.go:83] releasing machines lock for "no-preload-933000", held for 2.309654417s
	W0728 18:46:11.374778    6256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:11.385074    6256 out.go:177] 
	W0728 18:46:11.389187    6256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:11.389204    6256 out.go:239] * 
	* 
	W0728 18:46:11.391082    6256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:11.400024    6256 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (58.252625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
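Note: every start failure in this group reduces to the same host-side error visible in the stderr log above: the daemon behind "/var/run/socket_vmnet" refuses connections, so /opt/socket_vmnet/bin/socket_vmnet_client exits before it can hand the connected descriptor to qemu-system-aarch64 (the "-netdev socket,id=net0,fd=3" argument). The standalone Go probe below is a minimal diagnostic sketch, not part of minikube, that reproduces the failing dial; "connection refused" from it means nothing is listening on that socket path.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt the same unix-socket connection that socket_vmnet_client makes.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet service on the CI host is the likely fix; the exact mechanism depends on how socket_vmnet was installed there, which this report does not show.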

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-933000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-933000 create -f testdata/busybox.yaml: exit status 1 (29.469875ms)

** stderr ** 
	error: context "no-preload-933000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-933000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (30.946208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (29.435458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
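Note: this failure is a cascade, not an independent bug. Because FirstStart never brought the VM up, minikube never wrote a "no-preload-933000" context into the kubeconfig, so every subsequent "kubectl --context no-preload-933000 ..." invocation exits with the "context does not exist" error shown above. A minimal sketch of that check, assuming the k8s.io/client-go dependency (this is an illustration, not the harness's own code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		// After the failed first start, the profile's context is absent.
		if _, ok := cfg.Contexts["no-preload-933000"]; !ok {
			fmt.Println(`context "no-preload-933000" does not exist`)
		}
	}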

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-933000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-933000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-933000 describe deploy/metrics-server -n kube-system: exit status 1 (27.087417ms)

** stderr ** 
	error: context "no-preload-933000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-933000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (29.246083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
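Note: the "addon did not load correct image" assertion above is derivable from the flags on the enable command: the --registries override for MetricsServer is prefixed onto the --images override, giving the expected image reference. A minimal sketch of that composition (hypothetical illustration, not the harness code):

	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
		image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=registry.k8s.io/echoserver:1.4
		fmt.Println(registry + "/" + image)       // prints fake.domain/registry.k8s.io/echoserver:1.4
	}

The deployment description is empty here only because the cluster never existed; a healthy run would compare against exactly that string.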

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.969573459s)

-- stdout --
	* [embed-certs-593000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-593000" primary control-plane node in "embed-certs-593000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-593000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:12.112101    6319 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:12.112205    6319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:12.112209    6319 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:12.112211    6319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:12.112346    6319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:12.113401    6319 out.go:298] Setting JSON to false
	I0728 18:46:12.129527    6319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4543,"bootTime":1722213029,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:12.129621    6319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:12.134636    6319 out.go:177] * [embed-certs-593000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:12.141672    6319 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:12.141737    6319 notify.go:220] Checking for updates...
	I0728 18:46:12.149616    6319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:12.152565    6319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:12.155616    6319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:12.158608    6319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:12.159930    6319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:12.162958    6319 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:12.163027    6319 config.go:182] Loaded profile config "no-preload-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 18:46:12.163078    6319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:12.167605    6319 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:46:12.172578    6319 start.go:297] selected driver: qemu2
	I0728 18:46:12.172584    6319 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:46:12.172590    6319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:12.174851    6319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:46:12.178613    6319 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:46:12.180074    6319 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:12.180087    6319 cni.go:84] Creating CNI manager for ""
	I0728 18:46:12.180094    6319 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:12.180097    6319 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:46:12.180123    6319 start.go:340] cluster config:
	{Name:embed-certs-593000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:12.183963    6319 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:12.191635    6319 out.go:177] * Starting "embed-certs-593000" primary control-plane node in "embed-certs-593000" cluster
	I0728 18:46:12.195568    6319 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:12.195581    6319 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:46:12.195593    6319 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:12.195650    6319 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:12.195655    6319 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:12.195710    6319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/embed-certs-593000/config.json ...
	I0728 18:46:12.195721    6319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/embed-certs-593000/config.json: {Name:mk8aa193450052eaa1ede6e79f92f9f72a7cfdf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:12.196122    6319 start.go:360] acquireMachinesLock for embed-certs-593000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:12.196155    6319 start.go:364] duration metric: took 27.334µs to acquireMachinesLock for "embed-certs-593000"
	I0728 18:46:12.196169    6319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:12.196214    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:12.204565    6319 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:12.222381    6319 start.go:159] libmachine.API.Create for "embed-certs-593000" (driver="qemu2")
	I0728 18:46:12.222410    6319 client.go:168] LocalClient.Create starting
	I0728 18:46:12.222461    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:12.222489    6319 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:12.222498    6319 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:12.222538    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:12.222560    6319 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:12.222568    6319 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:12.223068    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:12.378064    6319 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:12.540105    6319 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:12.540111    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:12.540340    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:12.549796    6319 main.go:141] libmachine: STDOUT: 
	I0728 18:46:12.549812    6319 main.go:141] libmachine: STDERR: 
	I0728 18:46:12.549864    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2 +20000M
	I0728 18:46:12.558240    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:12.558264    6319 main.go:141] libmachine: STDERR: 
	I0728 18:46:12.558280    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:12.558284    6319 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:12.558297    6319 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:12.558326    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f0:df:3c:32:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:12.560071    6319 main.go:141] libmachine: STDOUT: 
	I0728 18:46:12.560089    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:12.560114    6319 client.go:171] duration metric: took 337.701958ms to LocalClient.Create
	I0728 18:46:14.562288    6319 start.go:128] duration metric: took 2.366075167s to createHost
	I0728 18:46:14.562380    6319 start.go:83] releasing machines lock for "embed-certs-593000", held for 2.366238667s
	W0728 18:46:14.562474    6319 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:14.581833    6319 out.go:177] * Deleting "embed-certs-593000" in qemu2 ...
	W0728 18:46:14.611203    6319 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:14.611228    6319 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:19.613296    6319 start.go:360] acquireMachinesLock for embed-certs-593000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:19.628013    6319 start.go:364] duration metric: took 14.613083ms to acquireMachinesLock for "embed-certs-593000"
	I0728 18:46:19.628070    6319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:19.628342    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:19.640803    6319 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:19.688292    6319 start.go:159] libmachine.API.Create for "embed-certs-593000" (driver="qemu2")
	I0728 18:46:19.688381    6319 client.go:168] LocalClient.Create starting
	I0728 18:46:19.688544    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:19.688623    6319 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:19.688640    6319 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:19.688704    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:19.688748    6319 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:19.688769    6319 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:19.689322    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:19.848568    6319 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:19.987722    6319 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:19.987731    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:19.987931    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:19.998299    6319 main.go:141] libmachine: STDOUT: 
	I0728 18:46:19.998322    6319 main.go:141] libmachine: STDERR: 
	I0728 18:46:19.998377    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2 +20000M
	I0728 18:46:20.007424    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:20.007445    6319 main.go:141] libmachine: STDERR: 
	I0728 18:46:20.007459    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:20.007475    6319 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:20.007488    6319 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:20.007514    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:41:ba:02:7b:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:20.009410    6319 main.go:141] libmachine: STDOUT: 
	I0728 18:46:20.009430    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:20.009445    6319 client.go:171] duration metric: took 321.042208ms to LocalClient.Create
	I0728 18:46:22.011680    6319 start.go:128] duration metric: took 2.38332925s to createHost
	I0728 18:46:22.011756    6319 start.go:83] releasing machines lock for "embed-certs-593000", held for 2.383736959s
	W0728 18:46:22.012071    6319 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:22.025634    6319 out.go:177] 
	W0728 18:46:22.028772    6319 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:22.028803    6319 out.go:239] * 
	* 
	W0728 18:46:22.031163    6319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:22.040642    6319 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (47.932792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
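Note: embed-certs fails identically on Kubernetes v1.30.3 with a fresh profile, which confirms the problem is the host's socket_vmnet networking rather than anything specific to the no-preload profile or the v1.31.0-beta.0 build. The stderr log also shows the create flow's recovery behavior: create, delete the half-built machine, wait five seconds, create once more. A hypothetical helper sketching that two-attempt pattern (an illustration of the flow visible in the log, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createWithRetry mirrors the sequence in the log: on failure it cleans
	// up ("* Deleting ... in qemu2 ..."), waits five seconds ("Will try
	// again in 5 seconds ..."), then makes one final attempt.
	func createWithRetry(create, cleanup func() error) error {
		if err := create(); err == nil {
			return nil
		}
		_ = cleanup()
		time.Sleep(5 * time.Second)
		return create()
	}

	func main() {
		err := createWithRetry(
			func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`) },
			func() error { return nil },
		)
		fmt.Println(err) // both attempts fail, matching the GUEST_PROVISION exit above
	}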

TestStartStop/group/no-preload/serial/SecondStart (5.84s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.790467208s)

-- stdout --
	* [no-preload-933000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-933000" primary control-plane node in "no-preload-933000" cluster
	* Restarting existing qemu2 VM for "no-preload-933000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-933000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:13.902910    6339 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:13.903034    6339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:13.903042    6339 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:13.903052    6339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:13.903193    6339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:13.904178    6339 out.go:298] Setting JSON to false
	I0728 18:46:13.920194    6339 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4544,"bootTime":1722213029,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:13.920282    6339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:13.925509    6339 out.go:177] * [no-preload-933000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:13.931487    6339 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:13.931532    6339 notify.go:220] Checking for updates...
	I0728 18:46:13.938483    6339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:13.941511    6339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:13.944540    6339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:13.947447    6339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:13.950460    6339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:13.953772    6339 config.go:182] Loaded profile config "no-preload-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 18:46:13.954031    6339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:13.958465    6339 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:46:13.965510    6339 start.go:297] selected driver: qemu2
	I0728 18:46:13.965516    6339 start.go:901] validating driver "qemu2" against &{Name:no-preload-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-933000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:13.965596    6339 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:13.967966    6339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:13.967991    6339 cni.go:84] Creating CNI manager for ""
	I0728 18:46:13.967999    6339 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:13.968025    6339 start.go:340] cluster config:
	{Name:no-preload-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-933000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:13.971585    6339 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.980510    6339 out.go:177] * Starting "no-preload-933000" primary control-plane node in "no-preload-933000" cluster
	I0728 18:46:13.984449    6339 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 18:46:13.984502    6339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/no-preload-933000/config.json ...
	I0728 18:46:13.984537    6339 cache.go:107] acquiring lock: {Name:mk7b1b69c1606f1420fea70fdfc405dc8ede5ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984534    6339 cache.go:107] acquiring lock: {Name:mk5f4deeba92c67f1b86d0b48d86406a78d9a2cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984565    6339 cache.go:107] acquiring lock: {Name:mk40db02310ed29dfed0ffb10fef60eebba55cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984593    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0728 18:46:13.984602    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0728 18:46:13.984607    6339 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 77.416µs
	I0728 18:46:13.984613    6339 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0728 18:46:13.984597    6339 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 65.625µs
	I0728 18:46:13.984616    6339 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0728 18:46:13.984609    6339 cache.go:107] acquiring lock: {Name:mk696b624ef407ff9f380c60fc44579b94373fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984620    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0728 18:46:13.984625    6339 cache.go:107] acquiring lock: {Name:mk903007b3e8f3d3fab693e55a68596345a0fedb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984625    6339 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 75µs
	I0728 18:46:13.984644    6339 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0728 18:46:13.984649    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0728 18:46:13.984618    6339 cache.go:107] acquiring lock: {Name:mk577c42188ace3d90afc9161049bad482ff2f23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984662    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0728 18:46:13.984668    6339 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 59.333µs
	I0728 18:46:13.984703    6339 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0728 18:46:13.984700    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0728 18:46:13.984697    6339 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 57.25µs
	I0728 18:46:13.984707    6339 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0728 18:46:13.984707    6339 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 89.875µs
	I0728 18:46:13.984710    6339 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0728 18:46:13.984734    6339 cache.go:107] acquiring lock: {Name:mk61bc7604cee7e45e71c138ef0662069ef96daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984736    6339 cache.go:107] acquiring lock: {Name:mk8c5ec3e1e2369b324e77bb608a427c2affc704 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:13.984792    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0728 18:46:13.984796    6339 cache.go:115] /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0728 18:46:13.984797    6339 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 116.708µs
	I0728 18:46:13.984800    6339 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 119.791µs
	I0728 18:46:13.984803    6339 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0728 18:46:13.984804    6339 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0728 18:46:13.984810    6339 cache.go:87] Successfully saved all images to host disk.
	I0728 18:46:13.984926    6339 start.go:360] acquireMachinesLock for no-preload-933000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:14.562538    6339 start.go:364] duration metric: took 577.581792ms to acquireMachinesLock for "no-preload-933000"
	I0728 18:46:14.562644    6339 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:14.562674    6339 fix.go:54] fixHost starting: 
	I0728 18:46:14.564188    6339 fix.go:112] recreateIfNeeded on no-preload-933000: state=Stopped err=<nil>
	W0728 18:46:14.564232    6339 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:14.572795    6339 out.go:177] * Restarting existing qemu2 VM for "no-preload-933000" ...
	I0728 18:46:14.585822    6339 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:14.586018    6339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:fb:15:56:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:14.595577    6339 main.go:141] libmachine: STDOUT: 
	I0728 18:46:14.595774    6339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:14.595890    6339 fix.go:56] duration metric: took 33.214583ms for fixHost
	I0728 18:46:14.595908    6339 start.go:83] releasing machines lock for "no-preload-933000", held for 33.340208ms
	W0728 18:46:14.595928    6339 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:14.596079    6339 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:14.596095    6339 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:19.598205    6339 start.go:360] acquireMachinesLock for no-preload-933000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:19.598602    6339 start.go:364] duration metric: took 332.125µs to acquireMachinesLock for "no-preload-933000"
	I0728 18:46:19.598725    6339 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:19.598748    6339 fix.go:54] fixHost starting: 
	I0728 18:46:19.599486    6339 fix.go:112] recreateIfNeeded on no-preload-933000: state=Stopped err=<nil>
	W0728 18:46:19.599512    6339 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:19.610859    6339 out.go:177] * Restarting existing qemu2 VM for "no-preload-933000" ...
	I0728 18:46:19.617920    6339 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:19.618094    6339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:fb:15:56:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/no-preload-933000/disk.qcow2
	I0728 18:46:19.627699    6339 main.go:141] libmachine: STDOUT: 
	I0728 18:46:19.627805    6339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:19.627907    6339 fix.go:56] duration metric: took 29.161292ms for fixHost
	I0728 18:46:19.627929    6339 start.go:83] releasing machines lock for "no-preload-933000", held for 29.304958ms
	W0728 18:46:19.628115    6339 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-933000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-933000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:19.643879    6339 out.go:177] 
	W0728 18:46:19.648010    6339 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:19.648040    6339 out.go:239] * 
	* 
	W0728 18:46:19.650366    6339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:19.656912    6339 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-933000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (49.843958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.84s)
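Note the failure's shape: the driver start fails, minikube logs "StartHost failed, but will try again", sleeps five seconds (start.go:729), retries exactly once, and then exits with GUEST_PROVISION / exit status 80. A minimal sketch of that single-retry pattern, using hypothetical helper names rather than minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the "exit status 80" the harness records
			}
		}
	}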
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-933000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (33.393833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-933000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.347625ms)
** stderr ** 
	error: context "no-preload-933000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (33.312042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)
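Both of the post-stop assertions above fail before they can test anything: kubectl is asked for context "no-preload-933000", which was never written to the kubeconfig because the SecondStart above never created the cluster. The same check can be made directly against the kubeconfig with client-go; a sketch assuming the k8s.io/client-go module is available:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does (KUBECONFIG, then defaults).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["no-preload-933000"]; !ok {
			// The exact condition behind the errors above.
			fmt.Println(`context "no-preload-933000" does not exist`)
		}
	}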
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-933000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (30.057959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
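The "(-want +got)" block above is go-cmp diff output: each "-" line is an image the test expected `image list` to report, and there are no "+" lines because the stopped VM returned nothing. A minimal reproduction of that output shape, assuming the github.com/google/go-cmp module is available:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.10"} // expected image
		got := []string{}                              // VM never started, so none reported
		// Entries present in want but missing from got print as "-" lines.
		fmt.Println(cmp.Diff(want, got))
	}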
TestStartStop/group/no-preload/serial/Pause (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-933000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-933000 --alsologtostderr -v=1: exit status 83 (43.565042ms)
-- stdout --
	* The control-plane node no-preload-933000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-933000"
-- /stdout --
** stderr ** 
	I0728 18:46:19.925854    6359 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:19.925991    6359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:19.925995    6359 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:19.925997    6359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:19.926147    6359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:19.926379    6359 out.go:298] Setting JSON to false
	I0728 18:46:19.926386    6359 mustload.go:65] Loading cluster: no-preload-933000
	I0728 18:46:19.926577    6359 config.go:182] Loaded profile config "no-preload-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 18:46:19.929893    6359 out.go:177] * The control-plane node no-preload-933000 host is not running: state=Stopped
	I0728 18:46:19.933904    6359 out.go:177]   To start a cluster, run: "minikube start -p no-preload-933000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-933000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (28.748417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (29.371583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
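Pause is the one subtest here that fails with exit status 83 rather than 80: minikube detects the stopped host up front and prints start guidance instead of attempting provisioning. The harness's "Non-zero exit ... exit status 83" lines come from inspecting the child process's exit code; a sketch of the same check with os/exec (binary path as used throughout this report):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-933000")
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				// Prints 83 when minikube takes the "host is not running" path above.
				fmt.Println("exit status", ee.ExitCode())
			}
		}
	}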
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.56s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.496061125s)
-- stdout --
	* [default-k8s-diff-port-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-860000" primary control-plane node in "default-k8s-diff-port-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0728 18:46:20.345272    6386 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:20.345403    6386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:20.345407    6386 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:20.345409    6386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:20.345536    6386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:20.346643    6386 out.go:298] Setting JSON to false
	I0728 18:46:20.362583    6386 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4551,"bootTime":1722213029,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:20.362654    6386 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:20.367965    6386 out.go:177] * [default-k8s-diff-port-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:20.373924    6386 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:20.373967    6386 notify.go:220] Checking for updates...
	I0728 18:46:20.380889    6386 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:20.383956    6386 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:20.386914    6386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:20.389877    6386 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:20.392931    6386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:20.396197    6386 config.go:182] Loaded profile config "embed-certs-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:20.396260    6386 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:20.396318    6386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:20.399874    6386 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:46:20.406920    6386 start.go:297] selected driver: qemu2
	I0728 18:46:20.406927    6386 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:46:20.406933    6386 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:20.409087    6386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 18:46:20.410714    6386 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:46:20.414002    6386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:20.414041    6386 cni.go:84] Creating CNI manager for ""
	I0728 18:46:20.414050    6386 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:20.414055    6386 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:46:20.414089    6386 start.go:340] cluster config:
	{Name:default-k8s-diff-port-860000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:20.417835    6386 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:20.426883    6386 out.go:177] * Starting "default-k8s-diff-port-860000" primary control-plane node in "default-k8s-diff-port-860000" cluster
	I0728 18:46:20.430887    6386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:20.430904    6386 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:46:20.430916    6386 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:20.430984    6386 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:20.430991    6386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:20.431065    6386 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/default-k8s-diff-port-860000/config.json ...
	I0728 18:46:20.431096    6386 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/default-k8s-diff-port-860000/config.json: {Name:mk7fc90da4b36353d6dab1c1d18b6825b41d7baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:20.431309    6386 start.go:360] acquireMachinesLock for default-k8s-diff-port-860000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:22.011889    6386 start.go:364] duration metric: took 1.580543583s to acquireMachinesLock for "default-k8s-diff-port-860000"
	I0728 18:46:22.012058    6386 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:22.012293    6386 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:22.021707    6386 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:22.069761    6386 start.go:159] libmachine.API.Create for "default-k8s-diff-port-860000" (driver="qemu2")
	I0728 18:46:22.069810    6386 client.go:168] LocalClient.Create starting
	I0728 18:46:22.069966    6386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:22.070028    6386 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:22.070042    6386 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:22.070108    6386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:22.070151    6386 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:22.070165    6386 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:22.070793    6386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:22.230048    6386 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:22.322819    6386 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:22.322831    6386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:22.323066    6386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:22.332843    6386 main.go:141] libmachine: STDOUT: 
	I0728 18:46:22.332877    6386 main.go:141] libmachine: STDERR: 
	I0728 18:46:22.332951    6386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2 +20000M
	I0728 18:46:22.341367    6386 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:22.341384    6386 main.go:141] libmachine: STDERR: 
	I0728 18:46:22.341417    6386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:22.341422    6386 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:22.341435    6386 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:22.341466    6386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:91:a5:b3:7f:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:22.343360    6386 main.go:141] libmachine: STDOUT: 
	I0728 18:46:22.343376    6386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:22.343396    6386 client.go:171] duration metric: took 273.583958ms to LocalClient.Create
	I0728 18:46:24.345583    6386 start.go:128] duration metric: took 2.33327925s to createHost
	I0728 18:46:24.345702    6386 start.go:83] releasing machines lock for "default-k8s-diff-port-860000", held for 2.333768625s
	W0728 18:46:24.345748    6386 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:24.355731    6386 out.go:177] * Deleting "default-k8s-diff-port-860000" in qemu2 ...
	W0728 18:46:24.380495    6386 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:24.380524    6386 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:29.382780    6386 start.go:360] acquireMachinesLock for default-k8s-diff-port-860000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:29.383226    6386 start.go:364] duration metric: took 314.625µs to acquireMachinesLock for "default-k8s-diff-port-860000"
	I0728 18:46:29.383355    6386 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:29.383724    6386 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:29.393098    6386 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:29.446684    6386 start.go:159] libmachine.API.Create for "default-k8s-diff-port-860000" (driver="qemu2")
	I0728 18:46:29.446731    6386 client.go:168] LocalClient.Create starting
	I0728 18:46:29.446853    6386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:29.446921    6386 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:29.446936    6386 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:29.447005    6386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:29.447059    6386 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:29.447077    6386 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:29.447603    6386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:29.605618    6386 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:29.733705    6386 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:29.733712    6386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:29.733912    6386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:29.743292    6386 main.go:141] libmachine: STDOUT: 
	I0728 18:46:29.743314    6386 main.go:141] libmachine: STDERR: 
	I0728 18:46:29.743366    6386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2 +20000M
	I0728 18:46:29.751216    6386 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:29.751231    6386 main.go:141] libmachine: STDERR: 
	I0728 18:46:29.751245    6386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:29.751257    6386 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:29.751267    6386 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:29.751293    6386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5b:dc:02:00:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:29.752928    6386 main.go:141] libmachine: STDOUT: 
	I0728 18:46:29.752946    6386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:29.752961    6386 client.go:171] duration metric: took 306.22725ms to LocalClient.Create
	I0728 18:46:31.755170    6386 start.go:128] duration metric: took 2.371446583s to createHost
	I0728 18:46:31.755259    6386 start.go:83] releasing machines lock for "default-k8s-diff-port-860000", held for 2.372033167s
	W0728 18:46:31.755617    6386 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:31.764963    6386 out.go:177] 
	W0728 18:46:31.777229    6386 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:31.777281    6386 out.go:239] * 
	* 
	W0728 18:46:31.780351    6386 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:31.793150    6386 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (62.89225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.56s)
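Every qemu2 start failure in this run reduces to the same root cause: the driver cannot connect to the socket_vmnet unix socket at /var/run/socket_vmnet. A minimal Go sketch of that connectivity probe, assuming only the socket path shown in the log (the helper is hypothetical triage code, not part of the minikube test suite):

// probe_socket_vmnet.go reproduces the failing step in isolation by
// dialing the unix socket the qemu2 driver hands to socket_vmnet_client.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // path taken from the log above

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// With the daemon down this surfaces the same "connection refused"
		// seen throughout this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}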

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-593000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-593000 create -f testdata/busybox.yaml: exit status 1 (30.580583ms)

** stderr ** 
	error: context "embed-certs-593000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-593000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (33.792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (32.790417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
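Because the first start never created the cluster, no "embed-certs-593000" context was written to the kubeconfig, so every kubectl step in this group fails before reaching the apiserver. A sketch of the pre-flight check kubectl effectively performs, written against client-go's clientcmd package (illustrative only, not the test's own code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ctx = "embed-certs-593000" // context name from the failure above

	// Load the kubeconfig the same way kubectl does by default.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[ctx]; !ok {
		// Mirrors: error: context "embed-certs-593000" does not exist
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", ctx)
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", ctx)
}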

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-593000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-593000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-593000 describe deploy/metrics-server -n kube-system: exit status 1 (27.643292ms)

** stderr ** 
	error: context "embed-certs-593000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-593000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (29.155708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
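The post-mortem command repeated throughout this report relies on minikube's --format flag, which renders a Go text/template against the status object; "Stopped" is the rendered Host field. A reduced sketch of that rendering, using a simplified stand-in struct rather than minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

// status is a simplified stand-in for the object minikube exposes to
// --format; any field beyond Host is an assumption for illustration.
type status struct {
	Host    string
	Kubelet string
}

func main() {
	// Equivalent in spirit to: minikube status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	tmpl.Execute(os.Stdout, status{Host: "Stopped", Kubelet: "Stopped"}) // prints: Stopped
}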

TestStartStop/group/embed-certs/serial/SecondStart (5.81s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.756176375s)

-- stdout --
	* [embed-certs-593000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-593000" primary control-plane node in "embed-certs-593000" cluster
	* Restarting existing qemu2 VM for "embed-certs-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:26.100494    6433 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:26.100623    6433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:26.100626    6433 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:26.100628    6433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:26.100742    6433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:26.101677    6433 out.go:298] Setting JSON to false
	I0728 18:46:26.117411    6433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4557,"bootTime":1722213029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:26.117483    6433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:26.121886    6433 out.go:177] * [embed-certs-593000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:26.128850    6433 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:26.128939    6433 notify.go:220] Checking for updates...
	I0728 18:46:26.135888    6433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:26.139860    6433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:26.143951    6433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:26.146860    6433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:26.149855    6433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:26.153066    6433 config.go:182] Loaded profile config "embed-certs-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:26.153330    6433 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:26.156856    6433 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:46:26.163802    6433 start.go:297] selected driver: qemu2
	I0728 18:46:26.163807    6433 start.go:901] validating driver "qemu2" against &{Name:embed-certs-593000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:26.163858    6433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:26.166039    6433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:26.166084    6433 cni.go:84] Creating CNI manager for ""
	I0728 18:46:26.166093    6433 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:26.166123    6433 start.go:340] cluster config:
	{Name:embed-certs-593000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:26.169696    6433 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:26.176849    6433 out.go:177] * Starting "embed-certs-593000" primary control-plane node in "embed-certs-593000" cluster
	I0728 18:46:26.180811    6433 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:26.180828    6433 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:46:26.180838    6433 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:26.180895    6433 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:26.180901    6433 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:26.180952    6433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/embed-certs-593000/config.json ...
	I0728 18:46:26.181445    6433 start.go:360] acquireMachinesLock for embed-certs-593000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:26.181484    6433 start.go:364] duration metric: took 32.083µs to acquireMachinesLock for "embed-certs-593000"
	I0728 18:46:26.181494    6433 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:26.181499    6433 fix.go:54] fixHost starting: 
	I0728 18:46:26.181616    6433 fix.go:112] recreateIfNeeded on embed-certs-593000: state=Stopped err=<nil>
	W0728 18:46:26.181624    6433 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:26.189834    6433 out.go:177] * Restarting existing qemu2 VM for "embed-certs-593000" ...
	I0728 18:46:26.193819    6433 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:26.193859    6433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:41:ba:02:7b:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:26.195847    6433 main.go:141] libmachine: STDOUT: 
	I0728 18:46:26.195866    6433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:26.195894    6433 fix.go:56] duration metric: took 14.394333ms for fixHost
	I0728 18:46:26.195899    6433 start.go:83] releasing machines lock for "embed-certs-593000", held for 14.41075ms
	W0728 18:46:26.195904    6433 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:26.195954    6433 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:26.195959    6433 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:31.198083    6433 start.go:360] acquireMachinesLock for embed-certs-593000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:31.755440    6433 start.go:364] duration metric: took 557.253833ms to acquireMachinesLock for "embed-certs-593000"
	I0728 18:46:31.755615    6433 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:31.755642    6433 fix.go:54] fixHost starting: 
	I0728 18:46:31.756346    6433 fix.go:112] recreateIfNeeded on embed-certs-593000: state=Stopped err=<nil>
	W0728 18:46:31.756372    6433 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:31.773168    6433 out.go:177] * Restarting existing qemu2 VM for "embed-certs-593000" ...
	I0728 18:46:31.780075    6433 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:31.780322    6433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:41:ba:02:7b:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/embed-certs-593000/disk.qcow2
	I0728 18:46:31.788983    6433 main.go:141] libmachine: STDOUT: 
	I0728 18:46:31.789055    6433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:31.789123    6433 fix.go:56] duration metric: took 33.484791ms for fixHost
	I0728 18:46:31.789145    6433 start.go:83] releasing machines lock for "embed-certs-593000", held for 33.655875ms
	W0728 18:46:31.789304    6433 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:31.805193    6433 out.go:177] 
	W0728 18:46:31.810338    6433 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:31.810426    6433 out.go:239] * 
	* 
	W0728 18:46:31.812495    6433 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:31.819190    6433 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-593000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (54.20675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-860000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-860000 create -f testdata/busybox.yaml: exit status 1 (31.287209ms)

** stderr ** 
	error: context "default-k8s-diff-port-860000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-860000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (29.6855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (32.699042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-593000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (33.852917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-593000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.6335ms)

** stderr ** 
	error: context "embed-certs-593000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (30.145625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-860000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-860000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-860000 describe deploy/metrics-server -n kube-system: exit status 1 (29.123208ms)

** stderr ** 
	error: context "default-k8s-diff-port-860000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-860000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (31.416083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-593000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (30.782667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
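The "(-want +got)" block above is go-cmp diff output: the expected v1.30.3 image list is compared against an empty result, because image list returns nothing for a VM that never started. A minimal reproduction of that comparison (image list abbreviated; this is illustrative code, not the test's own):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// want is copied (abbreviated) from the log above; got stays empty,
	// matching the stopped VM's image list.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/kube-apiserver:v1.30.3",
	}
	var got []string

	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}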

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-593000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-593000 --alsologtostderr -v=1: exit status 83 (47.734417ms)

-- stdout --
	* The control-plane node embed-certs-593000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-593000"

-- /stdout --
** stderr ** 
	I0728 18:46:32.088735    6468 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:32.088863    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:32.088867    6468 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:32.088869    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:32.088998    6468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:32.089205    6468 out.go:298] Setting JSON to false
	I0728 18:46:32.089211    6468 mustload.go:65] Loading cluster: embed-certs-593000
	I0728 18:46:32.089381    6468 config.go:182] Loaded profile config "embed-certs-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:32.092666    6468 out.go:177] * The control-plane node embed-certs-593000 host is not running: state=Stopped
	I0728 18:46:32.099647    6468 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-593000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-593000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (30.716875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (27.878791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
E0728 18:46:33.711582    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.874532959s)

-- stdout --
	* [newest-cni-722000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-722000" primary control-plane node in "newest-cni-722000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-722000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:32.405931    6493 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:32.406061    6493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:32.406064    6493 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:32.406066    6493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:32.406177    6493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:32.407255    6493 out.go:298] Setting JSON to false
	I0728 18:46:32.423255    6493 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4563,"bootTime":1722213029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:32.423401    6493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:32.428732    6493 out.go:177] * [newest-cni-722000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:32.435774    6493 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:32.435819    6493 notify.go:220] Checking for updates...
	I0728 18:46:32.442712    6493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:32.445764    6493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:32.448665    6493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:32.451718    6493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:32.454718    6493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:32.457943    6493 config.go:182] Loaded profile config "default-k8s-diff-port-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:32.458006    6493 config.go:182] Loaded profile config "multinode-429000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:32.458060    6493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:32.461728    6493 out.go:177] * Using the qemu2 driver based on user configuration
	I0728 18:46:32.468680    6493 start.go:297] selected driver: qemu2
	I0728 18:46:32.468685    6493 start.go:901] validating driver "qemu2" against <nil>
	I0728 18:46:32.468691    6493 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:32.470905    6493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0728 18:46:32.470929    6493 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0728 18:46:32.475703    6493 out.go:177] * Automatically selected the socket_vmnet network
	I0728 18:46:32.482771    6493 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0728 18:46:32.482786    6493 cni.go:84] Creating CNI manager for ""
	I0728 18:46:32.482792    6493 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:32.482807    6493 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 18:46:32.482835    6493 start.go:340] cluster config:
	{Name:newest-cni-722000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:32.486539    6493 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:32.494731    6493 out.go:177] * Starting "newest-cni-722000" primary control-plane node in "newest-cni-722000" cluster
	I0728 18:46:32.498696    6493 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 18:46:32.498709    6493 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0728 18:46:32.498720    6493 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:32.498782    6493 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:32.498788    6493 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 18:46:32.498849    6493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/newest-cni-722000/config.json ...
	I0728 18:46:32.498861    6493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/newest-cni-722000/config.json: {Name:mk29d46d83a9b3ff47247c24ce70513fff7375ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 18:46:32.499081    6493 start.go:360] acquireMachinesLock for newest-cni-722000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:32.499117    6493 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "newest-cni-722000"
	I0728 18:46:32.499130    6493 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:32.499160    6493 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:32.507685    6493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:32.525532    6493 start.go:159] libmachine.API.Create for "newest-cni-722000" (driver="qemu2")
	I0728 18:46:32.525562    6493 client.go:168] LocalClient.Create starting
	I0728 18:46:32.525634    6493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:32.525664    6493 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:32.525674    6493 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:32.525711    6493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:32.525738    6493 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:32.525744    6493 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:32.526099    6493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:32.673642    6493 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:32.797001    6493 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:32.797012    6493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:32.797233    6493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:32.806255    6493 main.go:141] libmachine: STDOUT: 
	I0728 18:46:32.806281    6493 main.go:141] libmachine: STDERR: 
	I0728 18:46:32.806334    6493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2 +20000M
	I0728 18:46:32.814130    6493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:32.814143    6493 main.go:141] libmachine: STDERR: 
	I0728 18:46:32.814159    6493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:32.814165    6493 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:32.814178    6493 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:32.814212    6493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a4:e6:73:2e:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:32.815822    6493 main.go:141] libmachine: STDOUT: 
	I0728 18:46:32.815838    6493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:32.815853    6493 client.go:171] duration metric: took 290.290292ms to LocalClient.Create
	I0728 18:46:34.818112    6493 start.go:128] duration metric: took 2.318952083s to createHost
	I0728 18:46:34.818173    6493 start.go:83] releasing machines lock for "newest-cni-722000", held for 2.31907075s
	W0728 18:46:34.818228    6493 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:34.835577    6493 out.go:177] * Deleting "newest-cni-722000" in qemu2 ...
	W0728 18:46:34.864168    6493 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:34.864197    6493 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:39.866289    6493 start.go:360] acquireMachinesLock for newest-cni-722000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:39.880415    6493 start.go:364] duration metric: took 14.019458ms to acquireMachinesLock for "newest-cni-722000"
	I0728 18:46:39.880472    6493 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 18:46:39.880760    6493 start.go:125] createHost starting for "" (driver="qemu2")
	I0728 18:46:39.889233    6493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0728 18:46:39.935535    6493 start.go:159] libmachine.API.Create for "newest-cni-722000" (driver="qemu2")
	I0728 18:46:39.935576    6493 client.go:168] LocalClient.Create starting
	I0728 18:46:39.935718    6493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/ca.pem
	I0728 18:46:39.935789    6493 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:39.935806    6493 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:39.935875    6493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1229/.minikube/certs/cert.pem
	I0728 18:46:39.935919    6493 main.go:141] libmachine: Decoding PEM data...
	I0728 18:46:39.935931    6493 main.go:141] libmachine: Parsing certificate...
	I0728 18:46:39.936429    6493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0728 18:46:40.095581    6493 main.go:141] libmachine: Creating SSH key...
	I0728 18:46:40.191477    6493 main.go:141] libmachine: Creating Disk image...
	I0728 18:46:40.191485    6493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0728 18:46:40.191657    6493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:40.201160    6493 main.go:141] libmachine: STDOUT: 
	I0728 18:46:40.201182    6493 main.go:141] libmachine: STDERR: 
	I0728 18:46:40.201253    6493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2 +20000M
	I0728 18:46:40.210113    6493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0728 18:46:40.210136    6493 main.go:141] libmachine: STDERR: 
	I0728 18:46:40.210156    6493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:40.210164    6493 main.go:141] libmachine: Starting QEMU VM...
	I0728 18:46:40.210175    6493 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:40.210203    6493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:18:0f:4a:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:40.212809    6493 main.go:141] libmachine: STDOUT: 
	I0728 18:46:40.212834    6493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:40.212851    6493 client.go:171] duration metric: took 277.273542ms to LocalClient.Create
	I0728 18:46:42.215107    6493 start.go:128] duration metric: took 2.334269208s to createHost
	I0728 18:46:42.215167    6493 start.go:83] releasing machines lock for "newest-cni-722000", held for 2.334748959s
	W0728 18:46:42.215538    6493 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-722000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-722000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:42.225158    6493 out.go:177] 
	W0728 18:46:42.229237    6493 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:42.229276    6493 out.go:239] * 
	* 
	W0728 18:46:42.231728    6493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:42.244151    6493 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (66.466125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-722000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
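
Every failure in this group reduces to the same host-side error: minikube launches its QEMU processes through socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is refusing connections, so the VM never gets a network and each start exits with status 80 (GUEST_PROVISION). A minimal check of the build agent, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation (service name and socket path may differ on other setups):

    # Is the daemon running, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If not, restart the Homebrew-managed service (requires root).
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

If the daemon comes back healthy, re-running the failed start command above should get past the "Starting QEMU VM..." step.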

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.84186925s)

-- stdout --
	* [default-k8s-diff-port-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-860000" primary control-plane node in "default-k8s-diff-port-860000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:34.108388    6513 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:34.108756    6513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:34.108761    6513 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:34.108764    6513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:34.108959    6513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:34.110268    6513 out.go:298] Setting JSON to false
	I0728 18:46:34.126474    6513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4565,"bootTime":1722213029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:34.126538    6513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:34.131545    6513 out.go:177] * [default-k8s-diff-port-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:34.140473    6513 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:34.140537    6513 notify.go:220] Checking for updates...
	I0728 18:46:34.147408    6513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:34.150439    6513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:34.153491    6513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:34.156427    6513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:34.159457    6513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:34.162747    6513 config.go:182] Loaded profile config "default-k8s-diff-port-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:34.163004    6513 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:34.167385    6513 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:46:34.174482    6513 start.go:297] selected driver: qemu2
	I0728 18:46:34.174489    6513 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:34.174564    6513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:34.176827    6513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 18:46:34.176867    6513 cni.go:84] Creating CNI manager for ""
	I0728 18:46:34.176875    6513 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:34.176911    6513 start.go:340] cluster config:
	{Name:default-k8s-diff-port-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:34.180331    6513 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:34.189409    6513 out.go:177] * Starting "default-k8s-diff-port-860000" primary control-plane node in "default-k8s-diff-port-860000" cluster
	I0728 18:46:34.193437    6513 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 18:46:34.193453    6513 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 18:46:34.193464    6513 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:34.193536    6513 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:34.193548    6513 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 18:46:34.193622    6513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/default-k8s-diff-port-860000/config.json ...
	I0728 18:46:34.194131    6513 start.go:360] acquireMachinesLock for default-k8s-diff-port-860000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:34.818320    6513 start.go:364] duration metric: took 624.174375ms to acquireMachinesLock for "default-k8s-diff-port-860000"
	I0728 18:46:34.818417    6513 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:34.818443    6513 fix.go:54] fixHost starting: 
	I0728 18:46:34.819086    6513 fix.go:112] recreateIfNeeded on default-k8s-diff-port-860000: state=Stopped err=<nil>
	W0728 18:46:34.819133    6513 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:34.823663    6513 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-860000" ...
	I0728 18:46:34.838616    6513 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:34.838903    6513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5b:dc:02:00:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:34.849232    6513 main.go:141] libmachine: STDOUT: 
	I0728 18:46:34.849294    6513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:34.849409    6513 fix.go:56] duration metric: took 30.973041ms for fixHost
	I0728 18:46:34.849427    6513 start.go:83] releasing machines lock for "default-k8s-diff-port-860000", held for 31.046083ms
	W0728 18:46:34.849456    6513 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:34.849618    6513 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:34.849636    6513 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:39.851870    6513 start.go:360] acquireMachinesLock for default-k8s-diff-port-860000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:39.852359    6513 start.go:364] duration metric: took 375.5µs to acquireMachinesLock for "default-k8s-diff-port-860000"
	I0728 18:46:39.852898    6513 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:39.852922    6513 fix.go:54] fixHost starting: 
	I0728 18:46:39.853725    6513 fix.go:112] recreateIfNeeded on default-k8s-diff-port-860000: state=Stopped err=<nil>
	W0728 18:46:39.853756    6513 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:39.867218    6513 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-860000" ...
	I0728 18:46:39.870199    6513 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:39.870458    6513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5b:dc:02:00:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/default-k8s-diff-port-860000/disk.qcow2
	I0728 18:46:39.880134    6513 main.go:141] libmachine: STDOUT: 
	I0728 18:46:39.880202    6513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:39.880315    6513 fix.go:56] duration metric: took 27.392209ms for fixHost
	I0728 18:46:39.880338    6513 start.go:83] releasing machines lock for "default-k8s-diff-port-860000", held for 27.952833ms
	W0728 18:46:39.880531    6513 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:39.897235    6513 out.go:177] 
	W0728 18:46:39.901220    6513 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:39.901241    6513 out.go:239] * 
	* 
	W0728 18:46:39.902820    6513 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:39.913237    6513 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-860000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (46.373333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.89s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-860000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (34.521833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
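
The context "default-k8s-diff-port-860000" does not exist failures here and in AddonExistsAfterStop below are downstream of the failed SecondStart: the VM was never restored, so no usable kubeconfig context exists for the profile and every kubectl call against it fails immediately. One way to confirm on the agent, assuming the KUBECONFIG path from the test environment above:

    KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig kubectl config get-contexts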

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-860000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.386667ms)

** stderr ** 
	error: context "default-k8s-diff-port-860000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (33.127208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-860000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (29.99925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-860000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-860000 --alsologtostderr -v=1: exit status 83 (42.604542ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-860000"

-- /stdout --
** stderr ** 
	I0728 18:46:40.177351    6533 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:40.177501    6533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:40.177505    6533 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:40.177507    6533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:40.177641    6533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:40.177864    6533 out.go:298] Setting JSON to false
	I0728 18:46:40.177873    6533 mustload.go:65] Loading cluster: default-k8s-diff-port-860000
	I0728 18:46:40.178124    6533 config.go:182] Loaded profile config "default-k8s-diff-port-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 18:46:40.182223    6533 out.go:177] * The control-plane node default-k8s-diff-port-860000 host is not running: state=Stopped
	I0728 18:46:40.186222    6533 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-860000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-860000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (28.79775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (28.577917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.18992625s)

-- stdout --
	* [newest-cni-722000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-722000" primary control-plane node in "newest-cni-722000" cluster
	* Restarting existing qemu2 VM for "newest-cni-722000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-722000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0728 18:46:45.420785    6579 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:45.420922    6579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:45.420926    6579 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:45.420928    6579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:45.421063    6579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:45.422076    6579 out.go:298] Setting JSON to false
	I0728 18:46:45.437837    6579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4576,"bootTime":1722213029,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 18:46:45.437898    6579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 18:46:45.443186    6579 out.go:177] * [newest-cni-722000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 18:46:45.450218    6579 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 18:46:45.450267    6579 notify.go:220] Checking for updates...
	I0728 18:46:45.457119    6579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 18:46:45.460108    6579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 18:46:45.463177    6579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 18:46:45.466137    6579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 18:46:45.474110    6579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 18:46:45.477503    6579 config.go:182] Loaded profile config "newest-cni-722000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 18:46:45.477757    6579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 18:46:45.482099    6579 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 18:46:45.489127    6579 start.go:297] selected driver: qemu2
	I0728 18:46:45.489134    6579 start.go:901] validating driver "qemu2" against &{Name:newest-cni-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:45.489184    6579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 18:46:45.491434    6579 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0728 18:46:45.491454    6579 cni.go:84] Creating CNI manager for ""
	I0728 18:46:45.491463    6579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 18:46:45.491493    6579 start.go:340] cluster config:
	{Name:newest-cni-722000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-722000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 18:46:45.495082    6579 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 18:46:45.503146    6579 out.go:177] * Starting "newest-cni-722000" primary control-plane node in "newest-cni-722000" cluster
	I0728 18:46:45.507133    6579 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 18:46:45.507149    6579 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0728 18:46:45.507162    6579 cache.go:56] Caching tarball of preloaded images
	I0728 18:46:45.507234    6579 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0728 18:46:45.507241    6579 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 18:46:45.507317    6579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/newest-cni-722000/config.json ...
	I0728 18:46:45.507768    6579 start.go:360] acquireMachinesLock for newest-cni-722000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:45.507803    6579 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "newest-cni-722000"
	I0728 18:46:45.507812    6579 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:45.507819    6579 fix.go:54] fixHost starting: 
	I0728 18:46:45.507935    6579 fix.go:112] recreateIfNeeded on newest-cni-722000: state=Stopped err=<nil>
	W0728 18:46:45.507943    6579 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:45.511164    6579 out.go:177] * Restarting existing qemu2 VM for "newest-cni-722000" ...
	I0728 18:46:45.519081    6579 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:45.519115    6579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:18:0f:4a:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:45.521081    6579 main.go:141] libmachine: STDOUT: 
	I0728 18:46:45.521101    6579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:45.521134    6579 fix.go:56] duration metric: took 13.315292ms for fixHost
	I0728 18:46:45.521139    6579 start.go:83] releasing machines lock for "newest-cni-722000", held for 13.332167ms
	W0728 18:46:45.521144    6579 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:45.521172    6579 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:45.521177    6579 start.go:729] Will try again in 5 seconds ...
	I0728 18:46:50.523277    6579 start.go:360] acquireMachinesLock for newest-cni-722000: {Name:mke33c2035b2a37afbdc6ad39fc6cda504f6c48f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0728 18:46:50.523781    6579 start.go:364] duration metric: took 416.125µs to acquireMachinesLock for "newest-cni-722000"
	I0728 18:46:50.523928    6579 start.go:96] Skipping create...Using existing machine configuration
	I0728 18:46:50.523946    6579 fix.go:54] fixHost starting: 
	I0728 18:46:50.524636    6579 fix.go:112] recreateIfNeeded on newest-cni-722000: state=Stopped err=<nil>
	W0728 18:46:50.524667    6579 fix.go:138] unexpected machine state, will restart: <nil>
	I0728 18:46:50.533905    6579 out.go:177] * Restarting existing qemu2 VM for "newest-cni-722000" ...
	I0728 18:46:50.537952    6579 qemu.go:418] Using hvf for hardware acceleration
	I0728 18:46:50.538193    6579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:18:0f:4a:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1229/.minikube/machines/newest-cni-722000/disk.qcow2
	I0728 18:46:50.546915    6579 main.go:141] libmachine: STDOUT: 
	I0728 18:46:50.546986    6579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0728 18:46:50.547073    6579 fix.go:56] duration metric: took 23.125334ms for fixHost
	I0728 18:46:50.547097    6579 start.go:83] releasing machines lock for "newest-cni-722000", held for 23.258417ms
	W0728 18:46:50.547293    6579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-722000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-722000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0728 18:46:50.555904    6579 out.go:177] 
	W0728 18:46:50.560020    6579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0728 18:46:50.560043    6579 out.go:239] * 
	* 
	W0728 18:46:50.562553    6579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 18:46:50.569907    6579 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-722000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (68.100542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-722000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
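
The qemu2 command line in the stderr dump above is worth unpacking, since the same "Connection refused" error repeats across nearly every qemu2 start in this run: minikube execs /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the unix socket at /var/run/socket_vmnet and hands the connected descriptor to qemu as file descriptor 3 (the "-netdev socket,id=net0,fd=3" argument). Below is a minimal Go sketch of that fd-passing scheme; it illustrates the mechanism only and is not minikube's or socket_vmnet_client's actual source. The dial step is exactly what fails throughout this report, because no socket_vmnet daemon is listening on the path.

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Fails with "connect: connection refused" when no daemon listens on the
	// socket path, which is the error repeated throughout this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	sock, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles entry 0 becomes fd 3 in the child process, which is what
	// "-netdev socket,id=net0,fd=3" on the qemu command line refers to.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{sock}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}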

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-722000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (29.209209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-722000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
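
The "(-want +got)" listing above is want/got diff notation: "-" lines are images the test expected "minikube image list" to report, and "+" lines would be unexpected extras; here every expected image is missing because the VM never started. A minimal sketch of how such a listing is produced, assuming github.com/google/go-cmp (the notation matches cmp.Diff output, though this is not a claim about the test's exact implementation):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // image list from the cluster; empty when the host is stopped
	if diff := cmp.Diff(want, got); diff != "" {
		// Prints a "-want +got" block in the same shape as the failure above.
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}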

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-722000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-722000 --alsologtostderr -v=1: exit status 83 (40.431583ms)

-- stdout --
	* The control-plane node newest-cni-722000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-722000"

-- /stdout --
** stderr ** 
	I0728 18:46:50.749116    6593 out.go:291] Setting OutFile to fd 1 ...
	I0728 18:46:50.749260    6593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:50.749265    6593 out.go:304] Setting ErrFile to fd 2...
	I0728 18:46:50.749268    6593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 18:46:50.749411    6593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 18:46:50.749626    6593 out.go:298] Setting JSON to false
	I0728 18:46:50.749633    6593 mustload.go:65] Loading cluster: newest-cni-722000
	I0728 18:46:50.749833    6593 config.go:182] Loaded profile config "newest-cni-722000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0728 18:46:50.753674    6593 out.go:177] * The control-plane node newest-cni-722000 host is not running: state=Stopped
	I0728 18:46:50.757627    6593 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-722000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-722000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (28.794166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-722000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (29.658583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-722000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.3/json-events 11.81
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.89
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.32
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 206.07
38 TestAddons/serial/Volcano 38.92
40 TestAddons/serial/GCPAuth/Namespaces 0.08
42 TestAddons/parallel/Registry 13.64
43 TestAddons/parallel/Ingress 17.99
44 TestAddons/parallel/InspektorGadget 10.25
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 53.19
49 TestAddons/parallel/Headlamp 18.53
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 51.78
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.22
54 TestAddons/StoppedEnableDisable 12.38
62 TestHyperKitDriverInstallOrUpdate 11.06
65 TestErrorSpam/setup 35.18
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.65
69 TestErrorSpam/unpause 0.61
70 TestErrorSpam/stop 55.31
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 49.12
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.66
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.53
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.6
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 37.44
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.64
93 TestFunctional/serial/LogsFileCmd 0.61
94 TestFunctional/serial/InvalidService 3.78
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 7.94
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.1
106 TestFunctional/parallel/PersistentVolumeClaim 25.41
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.39
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.37
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
120 TestFunctional/parallel/License 0.32
121 TestFunctional/parallel/Version/short 0.03
122 TestFunctional/parallel/Version/components 0.15
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.91
128 TestFunctional/parallel/ImageCommands/Setup 1.8
129 TestFunctional/parallel/DockerEnv/bash 0.34
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 4.14
161 TestFunctional/parallel/MountCmd/specific-port 1.19
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 202.23
170 TestMultiControlPlane/serial/DeployApp 4.46
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 168.61
173 TestMultiControlPlane/serial/NodeLabels 0.18
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.52
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.06
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 3.51
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 1.41
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.36
282 TestNoKubernetes/serial/Stop 2.14
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.75
299 TestStartStop/group/old-k8s-version/serial/Stop 3.75
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/no-preload/serial/Stop 2.07
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
323 TestStartStop/group/embed-certs/serial/Stop 3.62
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.86
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.05
343 TestStartStop/group/newest-cni/serial/Stop 2.89
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-504000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-504000: exit status 85 (96.429708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |          |
	|         | -p download-only-504000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:45:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:45:49.509310    1730 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:45:49.509449    1730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:49.509452    1730 out.go:304] Setting ErrFile to fd 2...
	I0728 17:45:49.509454    1730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:45:49.509581    1730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	W0728 17:45:49.509662    1730 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1229/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1229/.minikube/config/config.json: no such file or directory
	I0728 17:45:49.510980    1730 out.go:298] Setting JSON to true
	I0728 17:45:49.530267    1730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":920,"bootTime":1722213029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:45:49.530328    1730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:45:49.536111    1730 out.go:97] [download-only-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:45:49.536249    1730 notify.go:220] Checking for updates...
	W0728 17:45:49.536258    1730 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 17:45:49.540046    1730 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:45:49.543047    1730 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:45:49.547052    1730 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:45:49.550039    1730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:45:49.553033    1730 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	W0728 17:45:49.559006    1730 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:45:49.559203    1730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:45:49.565180    1730 out.go:97] Using the qemu2 driver based on user configuration
	I0728 17:45:49.565195    1730 start.go:297] selected driver: qemu2
	I0728 17:45:49.565208    1730 start.go:901] validating driver "qemu2" against <nil>
	I0728 17:45:49.565260    1730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:45:49.569072    1730 out.go:169] Automatically selected the socket_vmnet network
	I0728 17:45:49.574767    1730 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0728 17:45:49.574855    1730 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:45:49.574904    1730 cni.go:84] Creating CNI manager for ""
	I0728 17:45:49.574922    1730 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0728 17:45:49.574975    1730 start.go:340] cluster config:
	{Name:download-only-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:45:49.580610    1730 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:45:49.585383    1730 out.go:97] Downloading VM boot image ...
	I0728 17:45:49.585409    1730 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0728 17:45:55.808779    1730 out.go:97] Starting "download-only-504000" primary control-plane node in "download-only-504000" cluster
	I0728 17:45:55.808800    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:45:55.868981    1730 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 17:45:55.868998    1730 cache.go:56] Caching tarball of preloaded images
	I0728 17:45:55.869173    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:45:55.873742    1730 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0728 17:45:55.873750    1730 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:45:55.952849    1730 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0728 17:46:02.926828    1730 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:02.927014    1730 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:03.627511    1730 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0728 17:46:03.627715    1730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-504000/config.json ...
	I0728 17:46:03.627732    1730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-504000/config.json: {Name:mkc1eb2c526791a45f2480b9b9e481cfc6c3a312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:03.628017    1730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0728 17:46:03.628214    1730 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0728 17:46:04.312164    1730 out.go:169] 
	W0728 17:46:04.319238    1730 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80 0x104b79a80] Decompressors:map[bz2:0x140004ba6b0 gz:0x140004ba6b8 tar:0x140004ba600 tar.bz2:0x140004ba620 tar.gz:0x140004ba630 tar.xz:0x140004ba650 tar.zst:0x140004ba680 tbz2:0x140004ba620 tgz:0x140004ba630 txz:0x140004ba650 tzst:0x140004ba680 xz:0x140004ba6c0 zip:0x140004ba6d0 zst:0x140004ba6c8] Getters:map[file:0x1400078ab70 http:0x14000178eb0 https:0x14000178f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0728 17:46:04.319272    1730 out_reason.go:110] 
	W0728 17:46:04.324690    1730 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 17:46:04.327620    1730 out.go:169] 
	
	
	* The control-plane node download-only-504000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
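
Two details in the dump above deserve a note. The "?checksum=file:<url>.sha256" query string on each download URL is hashicorp/go-getter syntax (the dumped "getter: &{...Detectors:... Decompressors:...}" struct is go-getter's client): go-getter first fetches the named checksum file, then verifies the downloaded payload against it. The 404 that fails the kubectl cache step is the checksum sidecar itself failing to download, which go-getter surfaces as "invalid checksum". A hypothetical usage sketch, assuming the github.com/hashicorp/go-getter package (not minikube's internal download wrapper):

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum query instructs go-getter to download the .sha256 sidecar
	// and verify the binary against it; a 404 on the sidecar aborts the fetch.
	src := "https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256"
	if err := getter.GetFile("/tmp/kubectl", src); err != nil {
		log.Fatal(err)
	}
}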

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-504000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.30.3/json-events (11.81s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-329000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-329000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (11.814197667s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.81s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-329000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-329000: exit status 85 (78.177083ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |                     |
	|         | -p download-only-504000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-504000        | download-only-504000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only        | download-only-329000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-329000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:46:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:46:04.748284    1757 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:46:04.748411    1757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:04.748415    1757 out.go:304] Setting ErrFile to fd 2...
	I0728 17:46:04.748418    1757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:04.748537    1757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 17:46:04.749682    1757 out.go:298] Setting JSON to true
	I0728 17:46:04.767464    1757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":935,"bootTime":1722213029,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:46:04.767578    1757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:46:04.772597    1757 out.go:97] [download-only-329000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:46:04.772726    1757 notify.go:220] Checking for updates...
	I0728 17:46:04.777073    1757 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:46:04.780254    1757 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:46:04.784714    1757 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:46:04.787602    1757 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:46:04.790526    1757 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	W0728 17:46:04.796756    1757 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:46:04.796924    1757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:46:04.800695    1757 out.go:97] Using the qemu2 driver based on user configuration
	I0728 17:46:04.800703    1757 start.go:297] selected driver: qemu2
	I0728 17:46:04.800706    1757 start.go:901] validating driver "qemu2" against <nil>
	I0728 17:46:04.800752    1757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:46:04.803547    1757 out.go:169] Automatically selected the socket_vmnet network
	I0728 17:46:04.808985    1757 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0728 17:46:04.809080    1757 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:46:04.809099    1757 cni.go:84] Creating CNI manager for ""
	I0728 17:46:04.809110    1757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:46:04.809115    1757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 17:46:04.809162    1757 start.go:340] cluster config:
	{Name:download-only-329000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:46:04.812629    1757 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:04.814235    1757 out.go:97] Starting "download-only-329000" primary control-plane node in "download-only-329000" cluster
	I0728 17:46:04.814242    1757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:04.868956    1757 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 17:46:04.868967    1757 cache.go:56] Caching tarball of preloaded images
	I0728 17:46:04.869103    1757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:04.874500    1757 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0728 17:46:04.874508    1757 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:04.956916    1757 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0728 17:46:10.490045    1757 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:10.490216    1757 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:11.034018    1757 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0728 17:46:11.034208    1757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-329000/config.json ...
	I0728 17:46:11.034223    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-329000/config.json: {Name:mk65b4a49d362fc2d77ffac5f64e1601b22932d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:11.034445    1757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0728 17:46:11.035485    1757 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-329000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-329000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
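
Comparing this dump with the v1.20.0 one above shows the CNI auto-selection rule the two cni.go messages trace out: no CNI is recommended for v1.20.0, while the docker runtime on Kubernetes v1.24+ gets bridge CNI and NetworkPlugin=cni. A hypothetical condensation of that rule, not minikube's actual code, with version comparison via golang.org/x/mod/semver:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// chooseCNI mirrors the decision visible in the logs: bridge CNI from
// Kubernetes v1.24 onward with the docker runtime, no CNI before that.
func chooseCNI(kubernetesVersion string) string {
	if semver.Compare(kubernetesVersion, "v1.24.0") >= 0 {
		return "bridge" // "recommending bridge", NetworkPlugin=cni
	}
	return "" // "CNI unnecessary in this configuration, recommending no CNI"
}

func main() {
	fmt.Printf("v1.20.0 -> %q\n", chooseCNI("v1.20.0")) // empty: no CNI
	fmt.Printf("v1.30.3 -> %q\n", chooseCNI("v1.30.3")) // bridge
}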

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-329000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (12.89s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-362000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-362000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (12.890643458s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.89s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-362000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-362000: exit status 85 (79.003416ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-504000 | jenkins | v1.33.1 | 28 Jul 24 17:45 PDT |                     |
	|         | -p download-only-504000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-504000             | download-only-504000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only             | download-only-329000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-329000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| delete  | -p download-only-329000             | download-only-329000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT | 28 Jul 24 17:46 PDT |
	| start   | -o=json --download-only             | download-only-362000 | jenkins | v1.33.1 | 28 Jul 24 17:46 PDT |                     |
	|         | -p download-only-362000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/28 17:46:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 17:46:16.848701    1781 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:46:16.848842    1781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:16.848845    1781 out.go:304] Setting ErrFile to fd 2...
	I0728 17:46:16.848848    1781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:46:16.848976    1781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 17:46:16.849952    1781 out.go:298] Setting JSON to true
	I0728 17:46:16.868302    1781 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":947,"bootTime":1722213029,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:46:16.868365    1781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:46:16.872491    1781 out.go:97] [download-only-362000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:46:16.872571    1781 notify.go:220] Checking for updates...
	I0728 17:46:16.876507    1781 out.go:169] MINIKUBE_LOCATION=19312
	I0728 17:46:16.880640    1781 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:46:16.884530    1781 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:46:16.887568    1781 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:46:16.890590    1781 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	W0728 17:46:16.896576    1781 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 17:46:16.896711    1781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:46:16.899568    1781 out.go:97] Using the qemu2 driver based on user configuration
	I0728 17:46:16.899578    1781 start.go:297] selected driver: qemu2
	I0728 17:46:16.899581    1781 start.go:901] validating driver "qemu2" against <nil>
	I0728 17:46:16.899641    1781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0728 17:46:16.902529    1781 out.go:169] Automatically selected the socket_vmnet network
	I0728 17:46:16.907683    1781 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0728 17:46:16.907782    1781 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 17:46:16.907822    1781 cni.go:84] Creating CNI manager for ""
	I0728 17:46:16.907831    1781 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0728 17:46:16.907837    1781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0728 17:46:16.907869    1781 start.go:340] cluster config:
	{Name:download-only-362000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:46:16.912004    1781 iso.go:125] acquiring lock: {Name:mk50605025c84fd5811356cd56eef3764ba35f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 17:46:16.915602    1781 out.go:97] Starting "download-only-362000" primary control-plane node in "download-only-362000" cluster
	I0728 17:46:16.915608    1781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:16.967895    1781 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0728 17:46:16.967910    1781 cache.go:56] Caching tarball of preloaded images
	I0728 17:46:16.968060    1781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:16.972572    1781 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0728 17:46:16.972579    1781 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:17.049903    1781 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0728 17:46:22.754896    1781 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:22.755086    1781 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0728 17:46:23.275227    1781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0728 17:46:23.275424    1781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-362000/config.json ...
	I0728 17:46:23.275440    1781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/download-only-362000/config.json: {Name:mkb3ff504e4b918beeab6f42d47726c2eedc4d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 17:46:23.275990    1781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0728 17:46:23.276111    1781 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-362000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-362000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
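Note: the preload download above is checksum-verified (preload.go:254). A minimal manual re-check, assuming the cached tarball from this run is still on disk (md5 ships with macOS):

    # should print the md5 advertised in the download URL: 5025ece13368183bde5a7f01207f4bc3
    md5 /Users/jenkins/minikube-integration/19312-1229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4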

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-362000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.32s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-717000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-717000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-717000
--- PASS: TestBinaryMirror (0.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-894000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-894000: exit status 85 (54.469917ms)

-- stdout --
	* Profile "addons-894000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-894000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-894000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-894000: exit status 85 (58.347208ms)

-- stdout --
	* Profile "addons-894000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-894000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (206.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-894000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-894000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m26.06529275s)
--- PASS: TestAddons/Setup (206.07s)
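Note: a quick way to confirm which of the addons requested above actually came up is minikube's own addon listing; a sketch against the profile from this run:

    # prints every addon with its enabled/disabled status for the profile
    out/minikube-darwin-arm64 -p addons-894000 addons list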

TestAddons/serial/Volcano (38.92s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.223375ms
addons_test.go:905: volcano-admission stabilized in 7.339875ms
addons_test.go:897: volcano-scheduler stabilized in 7.361292ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-nwpfs" [0f56139e-33b1-4749-b897-a4c5988b53e9] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00370875s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-jzf96" [541419fa-1c03-4eb5-9765-1656437e7964] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004105833s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-92zdh" [2222efe8-cee3-4426-a4ef-083cac61a11a] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003965958s
addons_test.go:932: (dbg) Run:  kubectl --context addons-894000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-894000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-894000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [939dd204-c4d8-4260-a7a2-266161007b58] Pending
helpers_test.go:344: "test-job-nginx-0" [939dd204-c4d8-4260-a7a2-266161007b58] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [939dd204-c4d8-4260-a7a2-266161007b58] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004217084s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable volcano --alsologtostderr -v=1: (9.702711042s)
--- PASS: TestAddons/serial/Volcano (38.92s)
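Note: the helpers_test.go:344 waits above boil down to watching the vcjob's pods reach Running. A rough manual equivalent, assuming testdata/vcjob.yaml has been applied as in this run:

    # watch pods belonging to the test-job vcjob in the my-volcano namespace
    kubectl --context addons-894000 get pods -n my-volcano -l volcano.sh/job-name=test-job -w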

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-894000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-894000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (13.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.198042ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-scdsc" [905afdbc-8a2d-432e-8e9d-04d56c7294c8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0043145s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dfhtz" [1e9ba6fb-a77a-4777-b221-350b79e9d1a1] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00422675s
addons_test.go:342: (dbg) Run:  kubectl --context addons-894000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-894000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-894000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.337469334s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 ip
2024/07/28 17:51:05 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.64s)
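Note: the registry reachability check can be reproduced by hand with the same one-shot busybox pod the test launches:

    # exits non-zero if the in-cluster registry service does not answer
    kubectl --context addons-894000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"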

TestAddons/parallel/Ingress (17.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-894000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-894000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-894000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d56b43ca-34b7-490a-9581-b6e2ba6c888e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d56b43ca-34b7-490a-9581-b6e2ba6c888e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003965458s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-894000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable ingress-dns --alsologtostderr -v=1: (1.2112185s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable ingress --alsologtostderr -v=1: (7.2099025s)
--- PASS: TestAddons/parallel/Ingress (17.99s)
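Note: both ingress paths exercised above are easy to re-check by hand (commands taken from this run; the node IP 192.168.105.2 is specific to it):

    # ingress-nginx: reach the nginx backend through the ingress rule from inside the VM
    out/minikube-darwin-arm64 -p addons-894000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: resolve a test hostname against the cluster's DNS
    nslookup hello-john.test 192.168.105.2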

TestAddons/parallel/InspektorGadget (10.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zjcbn" [a36ac7df-dc71-4795-9199-754858116f23] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003931417s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-894000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-894000: (5.241299s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.32025ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-tb9cb" [834da5ed-3051-4fd7-9dff-fbdf1bd7384b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003971959s
addons_test.go:417: (dbg) Run:  kubectl --context addons-894000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (53.19s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.917083ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-894000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-894000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [992ea9b0-f82a-4799-a16f-15a8010ed280] Pending
helpers_test.go:344: "task-pv-pod" [992ea9b0-f82a-4799-a16f-15a8010ed280] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [992ea9b0-f82a-4799-a16f-15a8010ed280] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004009208s
addons_test.go:590: (dbg) Run:  kubectl --context addons-894000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-894000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-894000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-894000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-894000 delete pod task-pv-pod: (1.091933625s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-894000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-894000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-894000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [83d0d626-92ce-4893-94ea-949575fe8fc8] Pending
helpers_test.go:344: "task-pv-pod-restore" [83d0d626-92ce-4893-94ea-949575fe8fc8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [83d0d626-92ce-4893-94ea-949575fe8fc8] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003829833s
addons_test.go:632: (dbg) Run:  kubectl --context addons-894000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-894000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-894000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.076464375s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.19s)
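Note: the long runs of helpers_test.go:394 lines above are a poll loop on the claim's phase. A minimal shell sketch of the same wait, assuming the hpvc claim from this test:

    # block until the PVC reports phase Bound
    until [ "$(kubectl --context addons-894000 get pvc hpvc -n default -o jsonpath={.status.phase})" = "Bound" ]; do
      sleep 2
    done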

TestAddons/parallel/Headlamp (18.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-894000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-mqxbs" [857b932f-64a1-4d32-ac71-ec8fa3bce4b5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-mqxbs" [857b932f-64a1-4d32-ac71-ec8fa3bce4b5] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003498375s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable headlamp --alsologtostderr -v=1: (5.200716917s)
--- PASS: TestAddons/parallel/Headlamp (18.53s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-lc8kn" [78994183-d65d-42f0-b2d0-0b65b2c4bb84] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003712708s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-894000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (51.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-894000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-894000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d3626897-6c0e-4e16-921e-d4d4573df4ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d3626897-6c0e-4e16-921e-d4d4573df4ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d3626897-6c0e-4e16-921e-d4d4573df4ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004034917s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-894000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 ssh "cat /opt/local-path-provisioner/pvc-61258c7d-27f3-47ae-a072-5b06ae7e24fb_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-894000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-894000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.321381083s)
--- PASS: TestAddons/parallel/LocalPath (51.78s)
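Note: the ssh step above is what proves the local-path provisioner wrote through to the node's filesystem; the pvc-… directory name embeds the claim's UID and will differ on every run:

    # read back the file the test pod wrote into the provisioned hostPath volume
    out/minikube-darwin-arm64 -p addons-894000 ssh "cat /opt/local-path-provisioner/pvc-61258c7d-27f3-47ae-a072-5b06ae7e24fb_default_test-pvc/file1"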

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-22zcl" [72d16cde-6f97-40d2-895e-74b646b03f34] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004325333s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-894000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.22s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-8rmrd" [376df0b5-3b89-496d-9293-00abef6bea62] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003530334s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-894000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-894000 addons disable yakd --alsologtostderr -v=1: (5.211012292s)
--- PASS: TestAddons/parallel/Yakd (10.22s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-894000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-894000: (12.187181917s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-894000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-894000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-894000
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestHyperKitDriverInstallOrUpdate (11.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.06s)

TestErrorSpam/setup (35.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-005000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-005000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 --driver=qemu2 : (35.1834635s)
--- PASS: TestErrorSpam/setup (35.18s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (55.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop: (3.186619875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop: (26.056813458s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-005000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-005000 stop: (26.059926041s)
--- PASS: TestErrorSpam/stop (55.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19312-1229/.minikube/files/etc/test/nested/copy/1728/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0728 17:54:56.769921    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:56.776796    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:56.788871    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:56.810933    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:56.852982    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:56.935039    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:57.097116    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:57.419270    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:58.061445    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:54:59.343712    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:55:01.905934    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-843000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.123882333s)
--- PASS: TestFunctional/serial/StartWithProxy (49.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.66s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --alsologtostderr -v=8
E0728 17:55:07.026642    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:55:17.269117    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:55:37.751504    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-843000 --alsologtostderr -v=8: (37.661708625s)
functional_test.go:663: soft start took 37.662094125s for "functional-843000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.66s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-843000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local758708080/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache add minikube-local-cache-test:functional-843000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache delete minikube-local-cache-test:functional-843000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-843000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.188833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.60s)
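Note: condensed, the cache-reload cycle above is: remove the image inside the node, watch crictl fail to find it, then let minikube re-push it from the host cache:

    out/minikube-darwin-arm64 -p functional-843000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-darwin-arm64 -p functional-843000 cache reload
    out/minikube-darwin-arm64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again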

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 kubectl -- --context functional-843000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-843000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (37.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0728 17:56:18.713811    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-843000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.442928708s)
functional_test.go:761: restart took 37.443022833s for "functional-843000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.44s)
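Note: --extra-config takes component.flag=value pairs, so the restart above maps NamespaceAutoProvision onto the apiserver's enable-admission-plugins flag. The same invocation, shown standalone:

    out/minikube-darwin-arm64 start -p functional-843000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all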

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-843000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd888405210/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (3.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-843000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-843000: exit status 115 (99.024541ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30695 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-843000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.78s)
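
The behavior asserted above: `minikube service` exits with status 115 (SVC_UNREACHABLE) when the Service exists but has no running backing pod. A sketch of the sequence, assuming a manifest like the repo's testdata/invalidsvc.yaml (its contents are not shown in this log):

    kubectl --context functional-843000 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-843000   # prints the NodePort URL table,
    echo $?                                             # then exits 115 (SVC_UNREACHABLE)
    kubectl --context functional-843000 delete -f testdata/invalidsvc.yaml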

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 config get cpus: exit status 14 (30.61375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 config get cpus: exit status 14 (30.304375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
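
The round trip verified above: `config get` on an unset key exits 14 with "specified key could not be found in config" on stderr, while a set key echoes its stored value. The same sequence by hand:

    minikube -p functional-843000 config unset cpus
    minikube -p functional-843000 config get cpus    # exit status 14: key not found
    minikube -p functional-843000 config set cpus 2
    minikube -p functional-843000 config get cpus    # prints the stored value: 2
    minikube -p functional-843000 config unset cpus
    minikube -p functional-843000 config get cpus    # exit status 14 again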

TestFunctional/parallel/DashboardCmd (7.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-843000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-843000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2673: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.94s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.844334ms)
-- stdout --
	* [functional-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0728 17:57:18.107192    2656 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:57:18.107317    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.107321    2656 out.go:304] Setting ErrFile to fd 2...
	I0728 17:57:18.107323    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.107460    2656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 17:57:18.108666    2656 out.go:298] Setting JSON to false
	I0728 17:57:18.124936    2656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1609,"bootTime":1722213029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:57:18.125006    2656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:57:18.130643    2656 out.go:177] * [functional-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0728 17:57:18.138598    2656 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:57:18.138596    2656 notify.go:220] Checking for updates...
	I0728 17:57:18.145570    2656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:57:18.149589    2656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:57:18.152575    2656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:57:18.155593    2656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 17:57:18.158637    2656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:57:18.161852    2656 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:57:18.162092    2656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:57:18.166620    2656 out.go:177] * Using the qemu2 driver based on existing profile
	I0728 17:57:18.173590    2656 start.go:297] selected driver: qemu2
	I0728 17:57:18.173595    2656 start.go:901] validating driver "qemu2" against &{Name:functional-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:57:18.173637    2656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:57:18.179642    2656 out.go:177] 
	W0728 17:57:18.183423    2656 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0728 17:57:18.187601    2656 out.go:177] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
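
What the two invocations check: --dry-run still runs full validation, so a memory request below minikube's usable minimum of 1800MB fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while an otherwise valid dry run returns success without creating a VM. Sketch:

    # Fails validation: 250MiB is below the 1800MB usable minimum (exit status 23).
    minikube start -p functional-843000 --dry-run --memory 250MB --driver=qemu2

    # Validates the existing profile without starting anything.
    minikube start -p functional-843000 --dry-run --driver=qemu2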

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.666042ms)
-- stdout --
	* [functional-843000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0728 17:57:18.322956    2667 out.go:291] Setting OutFile to fd 1 ...
	I0728 17:57:18.323086    2667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.323089    2667 out.go:304] Setting ErrFile to fd 2...
	I0728 17:57:18.323091    2667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0728 17:57:18.323225    2667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
	I0728 17:57:18.324654    2667 out.go:298] Setting JSON to false
	I0728 17:57:18.341567    2667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1609,"bootTime":1722213029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0728 17:57:18.341653    2667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0728 17:57:18.345591    2667 out.go:177] * [functional-843000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0728 17:57:18.352670    2667 out.go:177]   - MINIKUBE_LOCATION=19312
	I0728 17:57:18.352723    2667 notify.go:220] Checking for updates...
	I0728 17:57:18.359571    2667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	I0728 17:57:18.362643    2667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0728 17:57:18.366559    2667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 17:57:18.369608    2667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	I0728 17:57:18.372633    2667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0728 17:57:18.375823    2667 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0728 17:57:18.376070    2667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0728 17:57:18.379605    2667 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0728 17:57:18.385596    2667 start.go:297] selected driver: qemu2
	I0728 17:57:18.385602    2667 start.go:901] validating driver "qemu2" against &{Name:functional-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0728 17:57:18.385681    2667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 17:57:18.391612    2667 out.go:177] 
	W0728 17:57:18.395667    2667 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0728 17:57:18.399614    2667 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
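
The second invocation above formats status through a Go template over minikube's status struct; the literal label "kublet" in the test's format string is just output text, while the template field itself is .Kubelet. Equivalent forms:

    minikube -p functional-843000 status              # human-readable table
    minikube -p functional-843000 status -o json      # machine-readable
    minikube -p functional-843000 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'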

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ab87e96d-27bb-4021-84ee-993fd9e22e21] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003322208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-843000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-843000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6197efe5-32e7-444d-88e0-d5ce47174614] Pending
helpers_test.go:344: "sp-pod" [6197efe5-32e7-444d-88e0-d5ce47174614] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6197efe5-32e7-444d-88e0-d5ce47174614] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004200583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-843000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-843000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4782121d-185f-4255-a7ca-cf01f39281f5] Pending
helpers_test.go:344: "sp-pod" [4782121d-185f-4255-a7ca-cf01f39281f5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4782121d-185f-4255-a7ca-cf01f39281f5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003750333s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-843000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.41s)
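
The persistence check above: write a file into the PVC-backed mount, delete the pod, recreate it from the same manifest, and confirm the file is still there. A rough equivalent using kubectl wait in place of the test's own readiness poller (the testdata manifests are assumed from the repo, not shown in this log):

    kubectl --context functional-843000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-843000 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-843000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-843000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-843000 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-843000 exec sp-pod -- ls /tmp/mount   # foo survives via the PVC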

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh -n functional-843000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cp functional-843000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2994591700/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh -n functional-843000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh -n functional-843000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)
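
Three copy directions are exercised: host to node, node back to host, and host to a node path whose parent directories do not yet exist. By hand:

    minikube -p functional-843000 cp testdata/cp-test.txt /home/docker/cp-test.txt               # host -> node
    minikube -p functional-843000 cp functional-843000:/home/docker/cp-test.txt /tmp/cp-test.txt # node -> host
    minikube -p functional-843000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt        # parents created
    minikube -p functional-843000 ssh "sudo cat /tmp/does/not/exist/cp-test.txt"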

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1728/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /etc/test/nested/copy/1728/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1728.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/1728.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1728.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /usr/share/ca-certificates/1728.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/17282.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /usr/share/ca-certificates/17282.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)
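
The .0 filenames checked above follow OpenSSL's subject-hash naming for CA directories, so the synced certificate is reachable both by name and by hash link. If that assumption holds for this run, the hash of the named file should match the link's basename:

    # Hypothetical check: print the subject hash of the synced cert;
    # it should read 51391683, matching /etc/ssl/certs/51391683.0 above.
    minikube -p functional-843000 ssh \
      "openssl x509 -noout -hash -in /usr/share/ca-certificates/1728.pem"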

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-843000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
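
The template here iterates the first node's label map and prints only the keys; `index .items 0` selects the node and `range $k, $v := ...` is standard text/template syntax. Standalone:

    kubectl --context functional-843000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'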

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh "sudo systemctl is-active crio": exit status 1 (85.886417ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
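
The assertion leans on systemd semantics: `systemctl is-active` prints "inactive" and exits non-zero (3) for a stopped unit, and `minikube ssh` surfaces the remote failure as its own exit status 1. Since this cluster runs the docker runtime, crio is expected to be inactive:

    minikube -p functional-843000 ssh "sudo systemctl is-active crio"
    # stdout: inactive
    # remote command exits 3; minikube ssh exits 1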

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-843000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-843000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-843000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-843000 image ls --format short --alsologtostderr:
I0728 17:57:25.085551    2699 out.go:291] Setting OutFile to fd 1 ...
I0728 17:57:25.085703    2699 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.085707    2699 out.go:304] Setting ErrFile to fd 2...
I0728 17:57:25.085710    2699 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.085840    2699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 17:57:25.086286    2699 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.086347    2699 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.087161    2699 ssh_runner.go:195] Run: systemctl --version
I0728 17:57:25.087173    2699 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/functional-843000/id_rsa Username:docker}
I0728 17:57:25.109161    2699 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-843000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-843000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-843000 | 3c5b78d5dbf57 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-843000 image ls --format table --alsologtostderr:
I0728 17:57:25.230054    2703 out.go:291] Setting OutFile to fd 1 ...
I0728 17:57:25.230228    2703 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.230232    2703 out.go:304] Setting ErrFile to fd 2...
I0728 17:57:25.230235    2703 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.230366    2703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 17:57:25.230824    2703 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.230889    2703 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.231674    2703 ssh_runner.go:195] Run: systemctl --version
I0728 17:57:25.231683    2703 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/functional-843000/id_rsa Username:docker}
I0728 17:57:25.255459    2703 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-843000 image ls --format json --alsologtostderr:
[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3c5b78d5dbf57765c9b94e5ea09a7e4a479e5b6df396ad9b7529a3e7a16de715","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-843000"],"size":"30"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-843000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-843000 image ls --format json --alsologtostderr:
I0728 17:57:25.152417    2701 out.go:291] Setting OutFile to fd 1 ...
I0728 17:57:25.152590    2701 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.152593    2701 out.go:304] Setting ErrFile to fd 2...
I0728 17:57:25.152595    2701 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.152749    2701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 17:57:25.153264    2701 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.153336    2701 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.154180    2701 ssh_runner.go:195] Run: systemctl --version
I0728 17:57:25.154189    2701 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/functional-843000/id_rsa Username:docker}
I0728 17:57:25.176540    2701 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-843000 image ls --format yaml --alsologtostderr:
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3c5b78d5dbf57765c9b94e5ea09a7e4a479e5b6df396ad9b7529a3e7a16de715
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-843000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-843000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-843000 image ls --format yaml --alsologtostderr:
I0728 17:57:25.018968    2697 out.go:291] Setting OutFile to fd 1 ...
I0728 17:57:25.019138    2697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.019141    2697 out.go:304] Setting ErrFile to fd 2...
I0728 17:57:25.019143    2697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.019301    2697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 17:57:25.019729    2697 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.019786    2697 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.020622    2697 ssh_runner.go:195] Run: systemctl --version
I0728 17:57:25.020637    2697 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/functional-843000/id_rsa Username:docker}
I0728 17:57:25.042893    2697 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh pgrep buildkitd: exit status 1 (55.8365ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr
2024/07/28 17:57:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr: (1.787161666s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr:
I0728 17:57:25.353696    2707 out.go:291] Setting OutFile to fd 1 ...
I0728 17:57:25.354132    2707 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.354137    2707 out.go:304] Setting ErrFile to fd 2...
I0728 17:57:25.354139    2707 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0728 17:57:25.354296    2707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1229/.minikube/bin
I0728 17:57:25.354727    2707 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.355452    2707 config.go:182] Loaded profile config "functional-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0728 17:57:25.356279    2707 ssh_runner.go:195] Run: systemctl --version
I0728 17:57:25.356289    2707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1229/.minikube/machines/functional-843000/id_rsa Username:docker}
I0728 17:57:25.378559    2707 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3363557457.tar
I0728 17:57:25.378617    2707 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0728 17:57:25.382685    2707 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3363557457.tar
I0728 17:57:25.384653    2707 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3363557457.tar: stat -c "%s %y" /var/lib/minikube/build/build.3363557457.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3363557457.tar': No such file or directory
I0728 17:57:25.384671    2707 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3363557457.tar --> /var/lib/minikube/build/build.3363557457.tar (3072 bytes)
I0728 17:57:25.406241    2707 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3363557457
I0728 17:57:25.417446    2707 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3363557457 -xf /var/lib/minikube/build/build.3363557457.tar
I0728 17:57:25.423058    2707 docker.go:360] Building image: /var/lib/minikube/build/build.3363557457
I0728 17:57:25.423105    2707 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-843000 /var/lib/minikube/build/build.3363557457
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:fe70c64706ee495a9ed96c8b71d1854da237bee4bd6e79ddc775ebf4f788cc08 done
#8 naming to localhost/my-image:functional-843000 done
#8 DONE 0.0s
I0728 17:57:27.032770    2707 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-843000 /var/lib/minikube/build/build.3363557457: (1.609645667s)
I0728 17:57:27.032852    2707 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3363557457
I0728 17:57:27.036667    2707 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3363557457.tar
I0728 17:57:27.039955    2707 build_images.go:217] Built localhost/my-image:functional-843000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3363557457.tar
I0728 17:57:27.039972    2707 build_images.go:133] succeeded building to: functional-843000
I0728 17:57:27.039975    2707 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)
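Note: buildkit steps #1-#8 above imply a three-instruction Dockerfile. A minimal sketch of reproducing the build by hand (the build-context directory and the content.txt payload are assumptions; the log only shows a 62B context and a 97B Dockerfile being transferred):

  # Dockerfile reconstructed from steps #5-#7 (FROM busybox, RUN true, ADD content.txt)
  mkdir /tmp/build-ctx && cd /tmp/build-ctx
  echo "test content" > content.txt   # placeholder payload, not taken from the log
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  # minikube tars the context, copies it into the guest, and runs docker build there:
  out/minikube-darwin-arm64 -p functional-843000 image build -t localhost/my-image:functional-843000 .
  out/minikube-darwin-arm64 -p functional-843000 image ls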

TestFunctional/parallel/ImageCommands/Setup (1.8s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.790069167s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-843000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/DockerEnv/bash (0.34s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-843000 docker-env) && out/minikube-darwin-arm64 status -p functional-843000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-843000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.34s)
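Note: docker-env prints shell exports that point the host docker CLI at the daemon inside the guest, which is what the eval-and-verify pattern above relies on. A sketch of the expanded flow (the variable values are illustrative, not captured from this run):

  # Variables typically emitted by docker-env (illustrative values):
  #   export DOCKER_TLS_VERIFY="1"
  #   export DOCKER_HOST="tcp://192.168.105.4:2376"
  #   export DOCKER_CERT_PATH="/Users/jenkins/.minikube/certs"
  #   export MINIKUBE_ACTIVE_DOCKERD="functional-843000"
  eval $(out/minikube-darwin-arm64 -p functional-843000 docker-env)
  docker images   # now served by the guest's docker daemon
  out/minikube-darwin-arm64 status -p functional-843000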

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-843000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-843000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-9kdn9" [7effe9a5-ae68-4415-9032-a8acac4fd61f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-9kdn9" [7effe9a5-ae68-4415-9032-a8acac4fd61f] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003548917s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
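Note: the DeployApp steps above are plain kubectl; the test then polls pods matching app=hello-node through its own helper (helpers_test.go:344). A standalone equivalent, substituting `kubectl wait` for the test's poller (an assumption, not what the harness runs):

  kubectl --context functional-843000 create deployment hello-node \
    --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-843000 expose deployment hello-node \
    --type=NodePort --port=8080
  # stand-in for the test's pod-readiness polling:
  kubectl --context functional-843000 wait --for=condition=available \
    deployment/hello-node --timeout=10m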

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-843000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image save kicbase/echo-server:functional-843000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image rm kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-843000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 image save --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-843000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
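Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above together exercise a save/remove/load round trip. Condensed into one sequence (the tar path is illustrative):

  out/minikube-darwin-arm64 -p functional-843000 image save \
    kicbase/echo-server:functional-843000 /tmp/echo-server-save.tar
  out/minikube-darwin-arm64 -p functional-843000 image rm \
    kicbase/echo-server:functional-843000
  out/minikube-darwin-arm64 -p functional-843000 image load /tmp/echo-server-save.tar
  out/minikube-darwin-arm64 -p functional-843000 image ls   # the image is back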

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2503: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3c4487de-b33e-4c36-9a81-80b3e06c88fb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3c4487de-b33e-4c36-9a81-80b3e06c88fb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002733666s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service list -o json
functional_test.go:1494: Took "78.751959ms" to run "out/minikube-darwin-arm64 -p functional-843000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32524
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32524
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-843000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.92.161 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
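Note: the tunnel checks above reduce to: run `minikube tunnel`, read the LoadBalancer ingress IP it assigns to nginx-svc, and hit that IP directly. A sketch (on macOS the tunnel may prompt for sudo to add routes, so backgrounding it as below is a simplification):

  out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr &
  TUNNEL_PID=$!
  IP=$(kubectl --context functional-843000 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sf "http://${IP}/" >/dev/null && echo "tunnel at http://${IP} is working"
  kill "$TUNNEL_PID"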

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-843000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "82.540417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.530292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "82.762792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.652417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
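Note: `profile list -o json` groups profiles into valid/invalid arrays, and --light skips the per-cluster status probe, which is why the --light runs above come in around 33ms versus ~83ms. A sketch of consuming the output (jq and the .valid[].Name shape are assumptions, not part of the test):

  out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'
  out/minikube-darwin-arm64 profile list -o json --light   # faster: no status checks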

TestFunctional/parallel/MountCmd/any-port (4.14s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3024374626/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722214631226537000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3024374626/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722214631226537000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3024374626/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722214631226537000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3024374626/001/test-1722214631226537000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 00:57 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 00:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 00:57 test-1722214631226537000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh cat /mount-9p/test-1722214631226537000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-843000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5cac01c-f45b-4298-b381-448aede45862] Pending
helpers_test.go:344: "busybox-mount" [d5cac01c-f45b-4298-b381-448aede45862] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5cac01c-f45b-4298-b381-448aede45862] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5cac01c-f45b-4298-b381-448aede45862] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003458209s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-843000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3024374626/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.14s)
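Note: any-port above is the full 9p round trip: mount a host directory into the guest, verify with findmnt over ssh, write files from both sides, then unmount. A standalone sketch (paths and the sleep are illustrative):

  SRC=$(mktemp -d)
  out/minikube-darwin-arm64 mount -p functional-843000 "$SRC:/mount-9p" --alsologtostderr -v=1 &
  MOUNT_PID=$!
  sleep 2   # give the 9p server a moment to come up
  echo created-by-test > "$SRC/created-by-test"
  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-darwin-arm64 -p functional-843000 ssh -- ls -la /mount-9p
  kill "$MOUNT_PID"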

TestFunctional/parallel/MountCmd/specific-port (1.19s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port115686256/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.204ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port115686256/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh "sudo umount -f /mount-9p": exit status 1 (56.341459ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-843000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port115686256/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount1: exit status 1 (75.614125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount3: exit status 1 (53.023958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-843000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-843000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-843000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup456087409/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-843000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-843000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-843000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (202.23s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-297000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0728 17:57:40.636656    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 17:59:56.771656    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
E0728 18:00:24.478365    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-297000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m22.022797083s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.23s)
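Note: StartCluster brings up a multi-control-plane (HA) cluster in one shot; worker nodes are added later via `node add` (see AddWorkerNode below). The invocation, taken verbatim from the log:

  out/minikube-darwin-arm64 start -p ha-297000 --wait=true --memory=2200 \
    --ha -v=7 --alsologtostderr --driver=qemu2
  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr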

TestMultiControlPlane/serial/DeployApp (4.46s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-297000 -- rollout status deployment/busybox: (2.738567709s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-2zqw7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-g6jwx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-gchpp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-2zqw7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-g6jwx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-gchpp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-2zqw7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-g6jwx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-gchpp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.46s)

TestMultiControlPlane/serial/PingHostFromPods (0.76s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-2zqw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-2zqw7 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-g6jwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-g6jwx -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-gchpp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-297000 -- exec busybox-fc5497c4f-gchpp -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)

TestMultiControlPlane/serial/AddWorkerNode (168.61s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-297000 -v=7 --alsologtostderr
E0728 18:01:33.759241    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:33.765578    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:33.777709    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:33.799791    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:33.841864    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:33.923955    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:34.086059    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:34.406238    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:35.048348    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:36.330612    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:38.892525    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:44.014759    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:01:54.256940    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:02:14.739138    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:02:55.700714    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-297000 -v=7 --alsologtostderr: (2m48.377002542s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (168.61s)

TestMultiControlPlane/serial/NodeLabels (0.18s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-297000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.52s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp testdata/cp-test.txt ha-297000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile565908187/001/cp-test_ha-297000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt ha-297000-m02:/home/docker/cp-test_ha-297000_ha-297000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test_ha-297000_ha-297000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt ha-297000-m03:/home/docker/cp-test_ha-297000_ha-297000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test_ha-297000_ha-297000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt ha-297000-m04:/home/docker/cp-test_ha-297000_ha-297000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test_ha-297000_ha-297000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp testdata/cp-test.txt ha-297000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile565908187/001/cp-test_ha-297000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m02:/home/docker/cp-test.txt ha-297000:/home/docker/cp-test_ha-297000-m02_ha-297000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test_ha-297000-m02_ha-297000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m02:/home/docker/cp-test.txt ha-297000-m03:/home/docker/cp-test_ha-297000-m02_ha-297000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test_ha-297000-m02_ha-297000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m02:/home/docker/cp-test.txt ha-297000-m04:/home/docker/cp-test_ha-297000-m02_ha-297000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test_ha-297000-m02_ha-297000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp testdata/cp-test.txt ha-297000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile565908187/001/cp-test_ha-297000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m03:/home/docker/cp-test.txt ha-297000:/home/docker/cp-test_ha-297000-m03_ha-297000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test_ha-297000-m03_ha-297000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m03:/home/docker/cp-test.txt ha-297000-m02:/home/docker/cp-test_ha-297000-m03_ha-297000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test_ha-297000-m03_ha-297000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m03:/home/docker/cp-test.txt ha-297000-m04:/home/docker/cp-test_ha-297000-m03_ha-297000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test_ha-297000-m03_ha-297000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp testdata/cp-test.txt ha-297000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile565908187/001/cp-test_ha-297000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m04:/home/docker/cp-test.txt ha-297000:/home/docker/cp-test_ha-297000-m04_ha-297000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000 "sudo cat /home/docker/cp-test_ha-297000-m04_ha-297000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m04:/home/docker/cp-test.txt ha-297000-m02:/home/docker/cp-test_ha-297000-m04_ha-297000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 "sudo cat /home/docker/cp-test_ha-297000-m04_ha-297000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000-m04:/home/docker/cp-test.txt ha-297000-m03:/home/docker/cp-test_ha-297000-m04_ha-297000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m03 "sudo cat /home/docker/cp-test_ha-297000-m04_ha-297000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.52s)
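Note: CopyFile exercises all three `cp` directions for every node pair: host to node, node to host, and node to node, each verified with `ssh -n <node> sudo cat`. One such triple as a sketch (the host-side destination path is illustrative):

  out/minikube-darwin-arm64 -p ha-297000 cp testdata/cp-test.txt ha-297000:/home/docker/cp-test.txt
  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt /tmp/cp-test_ha-297000.txt
  out/minikube-darwin-arm64 -p ha-297000 cp ha-297000:/home/docker/cp-test.txt \
    ha-297000-m02:/home/docker/cp-test_ha-297000_ha-297000-m02.txt
  out/minikube-darwin-arm64 -p ha-297000 ssh -n ha-297000-m02 \
    "sudo cat /home/docker/cp-test_ha-297000_ha-297000-m02.txt"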

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0728 18:17:56.829034    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/functional-843000/client.crt: no such file or directory
E0728 18:19:56.774104    1728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1229/.minikube/profiles/addons-894000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.059491041s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.06s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (3.51s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-847000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-847000 --output=json --user=testUser: (3.511049917s)
--- PASS: TestJSONOutput/stop/Command (3.51s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-409000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-409000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.739708ms)
-- stdout --
	{"specversion":"1.0","id":"d7b4f175-f436-4720-942f-9fdd05e64729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-409000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"751d6d3e-59d9-4bd0-98ec-6604d16b991b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"d180d524-3d64-4b61-8ca6-5b2aad5d9ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig"}}
	{"specversion":"1.0","id":"5ac47250-070f-4081-96d9-355bec8fb094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0ddc9f30-1999-4efd-ad62-4da6dcebea78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ce9ff04-a4c5-4214-a95e-63d305a9cd99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube"}}
	{"specversion":"1.0","id":"71750f0b-1ccd-43d6-8c1d-7a02401b1a92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"27dd9669-c8e9-4ced-84ce-9ccdd4c62a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-409000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-409000
--- PASS: TestErrorJSONOutput (0.20s)
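
Note: the --output=json stream in the stdout block above is newline-delimited JSON in the CloudEvents envelope (specversion, id, source, type, data). A minimal sketch of a consumer for such a stream follows; it is a hypothetical illustration, not part of the minikube codebase, and assumes only the envelope fields visible in the events above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models just the parts of the envelope used below.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Read events line by line, e.g. piped in as:
	//   minikube start --output=json | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout block above, this sketch would report the DRV_UNSUPPORTED_OS error with exit code 56.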

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.41s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-664000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.735333ms)
-- stdout --
	* [NoKubernetes-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1229/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1229/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
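
Note: exit status 14 above is minikube's MK_USAGE error; the test verifies that --no-kubernetes and --kubernetes-version are rejected as mutually exclusive. A minimal sketch of that kind of flag validation follows; it is illustrative only, not minikube's actual implementation, though the message and exit code mirror the log above.

package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

// validate rejects the mutually exclusive flag combination seen above.
func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	ver := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()
	if err := validate(*noK8s, *ver); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // exit status observed in the log above
	}
}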

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-664000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-664000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.548708ms)
-- stdout --
	* The control-plane node NoKubernetes-664000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-664000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.36s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.614129083s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.74878725s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.36s)

TestNoKubernetes/serial/Stop (2.14s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-664000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-664000: (2.136320083s)
--- PASS: TestNoKubernetes/serial/Stop (2.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-664000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-664000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.593792ms)
-- stdout --
	* The control-plane node NoKubernetes-664000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-664000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-278000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (3.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-260000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-260000 --alsologtostderr -v=3: (3.747217875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-260000 -n old-k8s-version-260000: exit status 7 (42.832167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-260000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
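
Note: the status checks above pass a Go template through --format; {{.Host}} selects a single field of the status structure, which is why the stdout block contains only the word Stopped. A minimal illustration of how such a template renders follows; the Status struct here is a hypothetical stand-in, not minikube's actual type.

package main

import (
	"os"
	"text/template"
)

// Status stands in for the structure minikube renders with --format;
// only the field selected by {{.Host}} matters for this sketch.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// For a stopped profile this prints "Stopped", matching the log above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"}); err != nil {
		panic(err)
	}
}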

TestStartStop/group/no-preload/serial/Stop (2.07s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-933000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-933000 --alsologtostderr -v=3: (2.0730595s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-933000 -n no-preload-933000: exit status 7 (57.779959ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-933000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.62s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-593000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-593000 --alsologtostderr -v=3: (3.622042167s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.62s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-593000 -n embed-certs-593000: exit status 7 (54.835875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-593000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-860000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-860000 --alsologtostderr -v=3: (1.861623041s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-860000 -n default-k8s-diff-port-860000: exit status 7 (62.007584ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-860000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-722000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

TestStartStop/group/newest-cni/serial/Stop (2.89s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-722000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-722000 --alsologtostderr -v=3: (2.888472792s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.89s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-722000 -n newest-cni-722000: exit status 7 (63.587709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-722000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.41s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-496000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-496000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-496000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/hosts:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/resolv.conf:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-496000

>>> host: crictl pods:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: crictl containers:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> k8s: describe netcat deployment:
error: context "cilium-496000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-496000" does not exist

>>> k8s: netcat logs:
error: context "cilium-496000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-496000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-496000" does not exist

>>> k8s: coredns logs:
error: context "cilium-496000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-496000" does not exist

>>> k8s: api server logs:
error: context "cilium-496000" does not exist

>>> host: /etc/cni:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: ip a s:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: ip r s:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: iptables-save:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: iptables table nat:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-496000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-496000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-496000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-496000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-496000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-496000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-496000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-496000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-496000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-496000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-496000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: kubelet daemon config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> k8s: kubelet logs:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-496000

>>> host: docker daemon status:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: docker daemon config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: docker system info:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: cri-docker daemon status:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: cri-docker daemon config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: cri-dockerd version:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: containerd daemon status:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: containerd daemon config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: containerd config dump:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: crio daemon status:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: crio daemon config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: /etc/crio:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"

>>> host: crio config:
* Profile "cilium-496000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-496000"
----------------------- debugLogs end: cilium-496000 [took: 2.299608375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-496000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-496000
--- SKIP: TestNetworkPlugins/group/cilium (2.41s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-993000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)